aaronjmars/unirig-ai đŧī¸ → đŧī¸
About
One Model to Rig Them All: Diverse Skeleton Rigging with UniRig
Example Output
Output
Performance Metrics
- Prediction Time: 94.86s
- Total Time: 197.51s
Input Parameters
- input_mesh (required)
  Input 3D model (.glb, .obj, .fbx, .vrm); see the example call below.
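Example call (a minimal sketch, assuming the Replicate Python client is installed and REPLICATE_API_TOKEN is set in the environment; the local file name my_model.glb is illustrative, and whether the call returns a URL string or a file-like object depends on the client version):

import replicate

# Run the pinned version of this model; input_mesh accepts .glb, .obj, .fbx, or .vrm
output = replicate.run(
    "aaronjmars/unirig-ai:9ee496eafcc6ab9789a110a6357e43e5ee8b93cee9ab653bdc6f06a29341ee86",
    input={"input_mesh": open("my_model.glb", "rb")},  # hypothetical local file
)
print(output)  # location of the rigged output model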
Output Schema
Output
Example Execution Logs
đ Original input file copied to: /tmp/unirig_prediction/input_copy/tmp8ao85pghtripo_pbr_model_8b2ddca8-9dbc-4310-b30c-f8c5f1d1deb6.glb
âšī¸ Created datalist for run.py at /app/repo/dataset_inference_clean/inference_datalist.txt pointing to 'cog_predict_data_dir'
--- đ Stage 1a: Data Extraction (raw_data.npz) ---
đ ī¸ Creating NPZ for /tmp/unirig_prediction/input_copy/tmp8ao85pghtripo_pbr_model_8b2ddca8-9dbc-4310-b30c-f8c5f1d1deb6.glb at /app/repo/dataset_inference_clean/cog_predict_data_dir/raw_data.npz
Mesh has 267738 faces, simplifying to 50000...
â
 NPZ file created at: /app/repo/dataset_inference_clean/cog_predict_data_dir/raw_data.npz
--- đĻ´ Stage 1b: Skeleton Prediction ---
đ Running command: python /app/repo/run.py --task /app/repo/configs/task/quick_inference_skeleton_articulationxl_ar_256.yaml in /app/repo
/root/.pyenv/versions/3.11.10/lib/python3.11/site-packages/timm/models/layers/__init__.py:48: FutureWarning: Importing from timm.models.layers is deprecated, please import via timm.layers
warnings.warn(f"Importing from {__name__} is deprecated, please import via timm.layers", FutureWarning)
Seed set to 123
load task config: /app/repo/configs/task/quick_inference_skeleton_articulationxl_ar_256.yaml
load data config: configs/data/quick_inference.yaml
load transform config: configs/transform/inference_ar_transform.yaml
load tokenizer config: configs/tokenizer/tokenizer_parts_articulationxl_256.yaml
load model config: configs/model/unirig_ar_350m_1024_81920_float32.yaml
Flash Attention 2.0 only supports torch.float16 and torch.bfloat16 dtypes, but the current dype in OPTForCausalLM is torch.float32. You should run training or inference using Automatic Mixed-Precision via the `with torch.autocast(device_type='torch_device'):` decorator, or load the model with the `torch_dtype` argument. Example: `model = AutoModel.from_pretrained("openai/whisper-tiny", attn_implementation="flash_attention_2", torch_dtype=torch.float16)`
load system config: configs/system/ar_inference_articulationxl.yaml
Using bfloat16 Automatic Mixed Precision (AMP)
Using default `ModelCheckpoint`. Consider installing `litmodels` package to enable `LitModelCheckpoint` for automatic upload to the Lightning model registry.
GPU available: True (cuda), used: True
TPU available: False, using: 0 TPU cores
HPU available: False, using: 0 HPUs
/root/.pyenv/versions/3.11.10/lib/python3.11/site-packages/lightning/pytorch/trainer/connectors/logger_connector/logger_connector.py:76: Starting from v1.9.0, `tensorboardX` has been removed as a dependency of the `lightning.pytorch` package, due to potential conflicts with other packages in the ML ecosystem. For this reason, `logger=True` will use `CSVLogger` as the default logger, unless the `tensorboard` or `tensorboardX` packages are found. Please `pip install lightning[extra]` or one of them to enable TensorBoard support by default
Restoring states from the checkpoint path at /root/.cache/huggingface/hub/models--VAST-AI--UniRig/snapshots/e53ad6237db5cea261727c5d50b67d140a4b9571/skeleton/articulation-xl_quantization_256/model.ckpt
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
Loaded model weights from the checkpoint at /root/.cache/huggingface/hub/models--VAST-AI--UniRig/snapshots/e53ad6237db5cea261727c5d50b67d140a4b9571/skeleton/articulation-xl_quantization_256/model.ckpt
/root/.pyenv/versions/3.11.10/lib/python3.11/site-packages/lightning/pytorch/trainer/connectors/data_connector.py:425: The 'predict_dataloader' does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` to `num_workers=9` in the `DataLoader` to improve performance.
Predicting: | | 0/? [00:00<?, ?it/s]
Predicting: 0%| | 0/1 [00:00<?, ?it/s]
Predicting DataLoader 0: 0%| | 0/1 [00:00<?, ?it/s]/root/.pyenv/versions/3.11.10/lib/python3.11/site-packages/torch/backends/cuda/__init__.py:342: FutureWarning: torch.backends.cuda.sdp_kernel() is deprecated. In the future, this context manager will be removed. Please see, torch.nn.attention.sdpa_kernel() for the new context manager, with updated signature.
warnings.warn(
./dataset_inference_clean/cog_predict_data_dir/predict_skeleton.npz
FBX export starting... './dataset_inference_clean/cog_predict_data_dir/skeleton.fbx'
export finished in 0.1123 sec.
Predicting DataLoader 0: 100%|██████████| 1/1 [00:03<00:00, 0.27it/s]
Predicting DataLoader 0: 100%|██████████| 1/1 [00:03<00:00, 0.27it/s]
â
 Command finished successfully.
â
 Skeleton FBX internally generated at /app/repo/dataset_inference_clean/cog_predict_data_dir/skeleton.fbx and copied to /tmp/unirig_prediction/skeleton_fbx_output/tmp8ao85pghtripo_pbr_model_8b2ddca8-9dbc-4310-b30c-f8c5f1d1deb6_skeleton.fbx
â
 predict_skeleton.npz found at /app/repo/dataset_inference_clean/cog_predict_data_dir/predict_skeleton.npz
--- đ¨ Stage 2: Skinning Weight Prediction ---
đ Running command: python /app/repo/run.py --task /app/repo/configs/task/quick_inference_unirig_skin.yaml --output /tmp/unirig_prediction/skinned_fbx_output/tmp8ao85pghtripo_pbr_model_8b2ddca8-9dbc-4310-b30c-f8c5f1d1deb6_skinned.fbx in /app/repo
/root/.pyenv/versions/3.11.10/lib/python3.11/site-packages/timm/models/layers/__init__.py:48: FutureWarning: Importing from timm.models.layers is deprecated, please import via timm.layers
warnings.warn(f"Importing from {__name__} is deprecated, please import via timm.layers", FutureWarning)
Seed set to 123
load task config: /app/repo/configs/task/quick_inference_unirig_skin.yaml
load data config: configs/data/quick_inference.yaml
load transform config: configs/transform/inference_skin_transform.yaml
load model config: configs/model/unirig_skin.yaml
WARNING: use BatchNorm in ptv3obj !!!
load system config: configs/system/skin.yaml
Using bfloat16 Automatic Mixed Precision (AMP)
Using default `ModelCheckpoint`. Consider installing `litmodels` package to enable `LitModelCheckpoint` for automatic upload to the Lightning model registry.
GPU available: True (cuda), used: True
TPU available: False, using: 0 TPU cores
HPU available: False, using: 0 HPUs
/root/.pyenv/versions/3.11.10/lib/python3.11/site-packages/lightning/pytorch/trainer/connectors/logger_connector/logger_connector.py:76: Starting from v1.9.0, `tensorboardX` has been removed as a dependency of the `lightning.pytorch` package, due to potential conflicts with other packages in the ML ecosystem. For this reason, `logger=True` will use `CSVLogger` as the default logger, unless the `tensorboard` or `tensorboardX` packages are found. Please `pip install lightning[extra]` or one of them to enable TensorBoard support by default
Restoring states from the checkpoint path at /root/.cache/huggingface/hub/models--VAST-AI--UniRig/snapshots/e53ad6237db5cea261727c5d50b67d140a4b9571/skin/articulation-xl/model.ckpt
/root/.pyenv/versions/3.11.10/lib/python3.11/site-packages/lightning/pytorch/trainer/call.py:282: Be aware that when using `ckpt_path`, callbacks used to create the checkpoint need to be provided during `Trainer` instantiation. Please add the following callbacks: ["ModelCheckpoint{'monitor': 'val_loss_sum', 'mode': 'min', 'every_n_train_steps': 0, 'every_n_epochs': 1, 'train_time_interval': None}"].
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
Loaded model weights from the checkpoint at /root/.cache/huggingface/hub/models--VAST-AI--UniRig/snapshots/e53ad6237db5cea261727c5d50b67d140a4b9571/skin/articulation-xl/model.ckpt
/root/.pyenv/versions/3.11.10/lib/python3.11/site-packages/lightning/pytorch/trainer/connectors/data_connector.py:425: The 'predict_dataloader' does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` to `num_workers=9` in the `DataLoader` to improve performance.
Predicting: | | 0/? [00:00<?, ?it/s]
Predicting: 0%| | 0/1 [00:00<?, ?it/s]
Predicting DataLoader 0: 0%| | 0/1 [00:00<?, ?it/s]/root/.pyenv/versions/3.11.10/lib/python3.11/site-packages/torch/backends/cuda/__init__.py:342: FutureWarning: torch.backends.cuda.sdp_kernel() is deprecated. In the future, this context manager will be removed. Please see, torch.nn.attention.sdpa_kernel() for the new context manager, with updated signature.
warnings.warn(
/app/repo/src/model/unirig_skin.py:359: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
'offset': torch.tensor(batch['offset']),
/app/repo/src/model/unirig_skin.py:423: UserWarning: Implicit dimension choice for softmax has been deprecated. Change the call to include dim=X as an argument.
skin_pred[i, :, :num_bones[i]] = F.softmax(pred)
FBX export starting... '/tmp/unirig_prediction/skinned_fbx_output/tmp8ao85pghtripo_pbr_model_8b2ddca8-9dbc-4310-b30c-f8c5f1d1deb6_skinned.fbx'
(bpy.data.armatures['Armature'], 'POSE')
export finished in 0.1652 sec.
Predicting DataLoader 0: 100%|██████████| 1/1 [00:01<00:00, 0.53it/s]
Predicting DataLoader 0: 100%|██████████| 1/1 [00:01<00:00, 0.53it/s]
â
 Command finished successfully.
â
 Skinned model saved to: /tmp/unirig_prediction/skinned_fbx_output/tmp8ao85pghtripo_pbr_model_8b2ddca8-9dbc-4310-b30c-f8c5f1d1deb6_skinned.fbx
--- ⨠Stage 3: Merging Results ---
đ Running command: bash /app/repo/launch/inference/merge.sh --source /tmp/unirig_prediction/skinned_fbx_output/tmp8ao85pghtripo_pbr_model_8b2ddca8-9dbc-4310-b30c-f8c5f1d1deb6_skinned.fbx --target /tmp/unirig_prediction/input_copy/tmp8ao85pghtripo_pbr_model_8b2ddca8-9dbc-4310-b30c-f8c5f1d1deb6.glb --output /tmp/unirig_prediction/merged_glb_output/tmp8ao85pghtripo_pbr_model_8b2ddca8-9dbc-4310-b30c-f8c5f1d1deb6_rigged.glb in /app/repo
FBX version: 7400
11:52:23 | INFO: Data are loaded, start creating Blender stuff
11:52:23 | INFO: Blender create Mesh node 0
11:52:23 | INFO: glTF import finished in 0.25s
tripo_node_8b2ddca8-9dbc-4310-b30c-f8c5f1d1deb6
/app/repo/src/inference/merge.py:265: RuntimeWarning: invalid value encountered in divide
vertex_group_reweight = vertex_group_reweight / vertex_group_reweight[..., :group_per_vertex].sum(axis=1)[...,None]
  0%|          | 0/140641 [00:00<?, ?it/s]
  6%|▋         | 9035/140641 [00:00<00:01, 90341.69it/s]
 13%|█▎        | 18199/140641 [00:00<00:01, 91098.65it/s]
 20%|█▉        | 27486/140641 [00:00<00:01, 91906.48it/s]
 26%|██▋       | 37042/140641 [00:00<00:01, 93347.33it/s]
 33%|███▎      | 46582/140641 [00:00<00:00, 94084.39it/s]
 40%|███▉      | 56133/140641 [00:00<00:00, 94565.65it/s]
 47%|████▋     | 65702/140641 [00:00<00:00, 94931.83it/s]
 53%|█████▎    | 75234/140641 [00:00<00:00, 95050.60it/s]
 60%|██████    | 84740/140641 [00:00<00:00, 93834.07it/s]
 67%|██████▋   | 94127/140641 [00:01<00:00, 92775.27it/s]
 74%|███████▎  | 103409/140641 [00:01<00:00, 91333.23it/s]
 80%|████████  | 112562/140641 [00:01<00:00, 91389.21it/s]
 87%|████████▋ | 121706/140641 [00:01<00:00, 90814.17it/s]
 93%|█████████▎| 130791/140641 [00:01<00:00, 89874.13it/s]
 99%|█████████▉| 139782/140641 [00:01<00:00, 88762.36it/s]
100%|██████████| 140641/140641 [00:01<00:00, 91707.16it/s]
ERROR Draco mesh compression is not available because library could not be found at /app/repo/4.2/python/lib/python3.11/site-packages/libextern_draco.so
11:53:01 | INFO: Starting glTF 2.0 export
11:53:01 | INFO: Extracting primitive: Mesh_0
11:53:02 | WARNING: More than one shader node tex image used for a texture. The resulting glTF sampler will behave like the first shader node tex image.
11:53:02 | INFO: Primitives created: 1
11:53:02 | INFO: Finished glTF 2.0 export in 1.152613878250122 s
/app/repo/launch/inference/merge.sh: line 31: 1449 Segmentation fault python -m src.inference.merge --require_suffix=obj,fbx,FBX,dae,glb,gltf,vrm --num_runs=1 --id=0 --source=/tmp/unirig_prediction/skinned_fbx_output/tmp8ao85pghtripo_pbr_model_8b2ddca8-9dbc-4310-b30c-f8c5f1d1deb6_skinned.fbx --target=/tmp/unirig_prediction/input_copy/tmp8ao85pghtripo_pbr_model_8b2ddca8-9dbc-4310-b30c-f8c5f1d1deb6.glb --output=/tmp/unirig_prediction/merged_glb_output/tmp8ao85pghtripo_pbr_model_8b2ddca8-9dbc-4310-b30c-f8c5f1d1deb6_rigged.glb
done
â
 Command finished successfully.
đ Final rigged model saved to: /tmp/unirig_prediction/merged_glb_output/tmp8ao85pghtripo_pbr_model_8b2ddca8-9dbc-4310-b30c-f8c5f1d1deb6_rigged.glb
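After the NPZ extraction in Stage 1a (done inside the predictor), the rest of the log reduces to the three commands shown above. A minimal sketch replaying them with subprocess, assuming raw_data.npz has already been written to dataset_inference_clean/cog_predict_data_dir; the /app/repo and /tmp/unirig_prediction paths are specific to this container image, and the shortened file names (model_skinned.fbx, model.glb, model_rigged.glb) are placeholders:

import subprocess

REPO = "/app/repo"
SKINNED = "/tmp/unirig_prediction/skinned_fbx_output/model_skinned.fbx"  # placeholder name
TARGET = "/tmp/unirig_prediction/input_copy/model.glb"                   # placeholder name
RIGGED = "/tmp/unirig_prediction/merged_glb_output/model_rigged.glb"     # placeholder name

# Stage 1b: autoregressive skeleton prediction (writes skeleton.fbx and predict_skeleton.npz)
subprocess.run(
    ["python", f"{REPO}/run.py",
     "--task", f"{REPO}/configs/task/quick_inference_skeleton_articulationxl_ar_256.yaml"],
    cwd=REPO, check=True)

# Stage 2: skinning weight prediction (writes the skinned FBX)
subprocess.run(
    ["python", f"{REPO}/run.py",
     "--task", f"{REPO}/configs/task/quick_inference_unirig_skin.yaml",
     "--output", SKINNED],
    cwd=REPO, check=True)

# Stage 3: merge the predicted skeleton and skin weights back into the original mesh
subprocess.run(
    ["bash", f"{REPO}/launch/inference/merge.sh",
     "--source", SKINNED, "--target", TARGET, "--output", RIGGED],
    cwd=REPO, check=True)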
Version Details
- Version ID
  9ee496eafcc6ab9789a110a6357e43e5ee8b93cee9ab653bdc6f06a29341ee86
- Version Created
  June 3, 2025