deepfates/hunyuan-westworld

▶️ 89 runs 📅 Jan 2025 ⚙️ Cog 0.13.6
text-to-video video-lora-training westworld

About

HunyuanVideo model fine-tuned on Westworld (2016). The trigger word is "WSTWR"; start your prompt with "A video in the style of WSTWR, WSTWR" for best results.
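A minimal sketch of calling this model from Python with the Replicate client, assuming the replicate package is installed and REPLICATE_API_TOKEN is set. The prompt and parameter values are illustrative; the version ID is the one listed under Version Details below.

# Minimal sketch: running this model with the Replicate Python client.
# Assumes `pip install replicate` and REPLICATE_API_TOKEN in the environment.
import replicate

output = replicate.run(
    "deepfates/hunyuan-westworld:e010979271e08c7c51cb50eba7efe208c175a7f0a6deeab736c341546eab1013",
    input={
        # Start the prompt with the trigger phrase for best results.
        "prompt": "A video in the style of WSTWR, WSTWR A slow push-in on a dusty frontier town at dawn.",
        "width": 640,
        "height": 360,
        "num_frames": 33,
        "steps": 50,
        "guidance_scale": 6,
        "frame_rate": 16,
    },
)

# The output schema is a URI string pointing to the rendered video.
print(output)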

Example Output

Prompt:

"A video in the style of WSTWR, WSTWR The video clip depicts a detailed portrait of a woman's face. She has fair skin and bright, intense blue eyes that gaze directly ahead. Her hair is dark and wavy, cascading down her shoulders in a curly pattern. She is dressed in a dark dress with a square neckline adorned with intricate metallic accents. The dress has a detailed, almost mosaic-like pattern, suggesting a ceremonial or formal attire.
The woman's expression is stoic and serious, conveying a sense of determination or resolve. The background is dark, contrasting with the intricate details of her dress and the brightness of her eyes. The lighting is soft and diffused, casting a warm glow on"

Output

Performance Metrics

153.87s Prediction Time
180.83s Total Time
All Input Parameters
{
  "crf": 19,
  "seed": 12345,
  "steps": 50,
  "width": 640,
  "height": 360,
  "prompt": "A video in the style of WSTWR, WSTWR The video clip depicts a detailed portrait of a woman's face. She has fair skin and bright, intense blue eyes that gaze directly ahead. Her hair is dark and wavy, cascading down her shoulders in a curly pattern. She is dressed in a dark dress with a square neckline adorned with intricate metallic accents. The dress has a detailed, almost mosaic-like pattern, suggesting a ceremonial or formal attire.\nThe woman's expression is stoic and serious, conveying a sense of determination or resolve. The background is dark, contrasting with the intricate details of her dress and the brightness of her eyes. The lighting is soft and diffused, casting a warm glow on",
  "lora_url": "",
  "scheduler": "DPMSolverMultistepScheduler",
  "flow_shift": 9,
  "frame_rate": 16,
  "num_frames": 66,
  "enhance_end": 1,
  "enhance_start": 0,
  "force_offload": true,
  "lora_strength": 1,
  "enhance_double": true,
  "enhance_single": true,
  "enhance_weight": 0.3,
  "guidance_scale": 6,
  "denoise_strength": 1
}
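Note that three of these inputs were not used exactly as given: the example logs below show 640x360 adjusted to 640x368 and 66 frames adjusted to 65. The following is a hypothetical sketch of that rounding, inferred only from the log messages and not taken from the model's code.

# Hypothetical rounding rules inferred from the example log warnings
# ("Adjusted dimensions from 640x360 to 640x368", "Adjusted frame count from 66 to 65").
# Not the model's actual code: dimensions appear to be rounded up to a multiple of 16,
# and the frame count snapped down to the nearest value of the form 4*k + 1.

def adjust_dimension(value: int, multiple: int = 16) -> int:
    """Round a width or height up to the next multiple of `multiple`."""
    return ((value + multiple - 1) // multiple) * multiple

def adjust_frame_count(frames: int) -> int:
    """Snap a frame count to the form 4*k + 1 (17 latents for 65 frames, as logged)."""
    return ((frames - 1) // 4) * 4 + 1

print(adjust_dimension(360), adjust_frame_count(66))  # 368 65, matching the logs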
Input Parameters
crf Type: integer, Default: 19, Range: 0 - 51
CRF (quality) for H264 encoding. Lower values = higher quality.
seed Type: integer
Set a seed for reproducibility. Random by default.
steps Type: integer, Default: 50, Range: 1 - 150
Number of diffusion steps.
width Type: integer, Default: 640, Range: 64 - 1536
Width for the generated video.
height Type: integer, Default: 360, Range: 64 - 1024
Height for the generated video.
prompt Type: string, Default: (empty)
The text prompt describing your video scene.
lora_url Type: string, Default: (empty)
A URL pointing to your LoRA .safetensors file, or a Hugging Face repo (e.g. 'user/repo'; the first .safetensors file in the repo is used - see the sketch after this list).
scheduler Default: DPMSolverMultistepScheduler
Algorithm used to generate the video frames.
flow_shift Type: integer, Default: 9, Range: 0 - 20
Video continuity factor (flow).
frame_rate Type: integer, Default: 16, Range: 1 - 60
Video frame rate.
num_frames Type: integer, Default: 33, Range: 1 - 1440
How many frames (i.e. duration) in the resulting video.
enhance_end Type: number, Default: 1, Range: 0 - 1
When to end enhancement in the video. Must be greater than enhance_start.
enhance_start Type: number, Default: 0, Range: 0 - 1
When to start enhancement in the video. Must be less than enhance_end.
force_offload Type: boolean, Default: true
Whether to force model layers to be offloaded to the CPU.
lora_strength Type: number, Default: 1, Range: -10 to 10
Scale/strength for your LoRA.
enhance_double Type: boolean, Default: true
Apply enhancement across frame pairs.
enhance_single Type: boolean, Default: true
Apply enhancement to individual frames.
enhance_weight Type: number, Default: 0.3, Range: 0 - 2
Strength of the video enhancement effect.
guidance_scale Type: number, Default: 6, Range: 0 - 30
How strongly the text prompt influences the output.
denoise_strength Type: number, Default: 1, Range: 0 - 2
Controls how strongly noise is applied at each step.
replicate_weights Type: string
A .tar file containing LoRA weights from Replicate.
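As referenced in the lora_url description, a Hugging Face repo id is resolved to its first .safetensors file. Below is a hypothetical sketch of that lookup using the public huggingface_hub API; it is not the model's actual code, and the helper name is made up.

# Hypothetical sketch of resolving a `lora_url` given as a Hugging Face repo id
# ("user/repo") to its first .safetensors file. Not the model's actual code.
from huggingface_hub import hf_hub_download, list_repo_files

def first_safetensors_from_repo(repo_id: str) -> str:
    """Download the first .safetensors file found in `repo_id` and return its local path."""
    files = [f for f in list_repo_files(repo_id) if f.endswith(".safetensors")]
    if not files:
        raise ValueError(f"No .safetensors file found in {repo_id}")
    return hf_hub_download(repo_id=repo_id, filename=files[0])

# Direct .safetensors URLs would bypass this lookup and be downloaded as-is.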
Output Schema

Output

Type: string, Format: uri
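Since the output is a URI string, the rendered video can be saved locally once a prediction finishes. A minimal sketch using requests; the URL shown is a placeholder, not a real output.

# Minimal sketch: downloading the output video from the returned URI.
# `output_url` is a placeholder; use the URI string returned by your prediction.
import requests

output_url = "https://replicate.delivery/.../HunyuanVideo_00001.mp4"  # placeholder

with requests.get(output_url, stream=True, timeout=60) as resp:
    resp.raise_for_status()
    with open("hunyuan_westworld.mp4", "wb") as f:
        for chunk in resp.iter_content(chunk_size=1 << 20):
            f.write(chunk)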

Example Execution Logs
Seed set to: 12345
⚠️  Adjusted dimensions from 640x360 to 640x368 to satisfy model requirements
⚠️  Adjusted frame count from 66 to 65 to satisfy model requirements
USING REPLICATE WEIGHTS (preferred method)
🎯 USING REPLICATE WEIGHTS TAR FILE 🎯
----------------------------------------
📦 Processing replicate weights tar file...
🔄 Will rename LoRA to: replicate_82633b3b-74ca-4f4a-9470-65827f216d6c.safetensors
📂 Extracting tar contents...
✅ Found lora_comfyui.safetensors in tar
✨ Successfully copied LoRA to: ComfyUI/models/loras/replicate_82633b3b-74ca-4f4a-9470-65827f216d6c.safetensors
----------------------------------------
Checking inputs
====================================
Checking weights
✅ hunyuan_video_vae_bf16.safetensors exists in ComfyUI/models/vae
✅ hunyuan_video_720_fp8_e4m3fn.safetensors exists in ComfyUI/models/diffusion_models
====================================
Running workflow
[ComfyUI] got prompt
Executing node 7, title: HunyuanVideo VAE Loader, class type: HyVideoVAELoader
Executing node 42, title: HunyuanVideo Enhance A Video, class type: HyVideoEnhanceAVideo
[ComfyUI] Loading text encoder model (clipL) from: /src/ComfyUI/models/clip/clip-vit-large-patch14
Executing node 16, title: (Down)Load HunyuanVideo TextEncoder, class type: DownloadAndLoadHyVideoTextEncoder
[ComfyUI] Text encoder to dtype: torch.float16
[ComfyUI] Loading tokenizer (clipL) from: /src/ComfyUI/models/clip/clip-vit-large-patch14
[ComfyUI] Loading text encoder model (llm) from: /src/ComfyUI/models/LLM/llava-llama-3-8b-text-encoder-tokenizer
[ComfyUI]
[ComfyUI] Loading checkpoint shards:   0%|          | 0/4 [00:00<?, ?it/s]
[ComfyUI] Loading checkpoint shards:  25%|██▌       | 1/4 [00:00<00:01,  1.65it/s]
[ComfyUI] Loading checkpoint shards:  50%|█████     | 2/4 [00:01<00:01,  1.61it/s]
[ComfyUI] Loading checkpoint shards:  75%|███████▌  | 3/4 [00:01<00:00,  1.64it/s]
[ComfyUI] Loading checkpoint shards: 100%|██████████| 4/4 [00:01<00:00,  2.44it/s]
[ComfyUI] Loading checkpoint shards: 100%|██████████| 4/4 [00:01<00:00,  2.06it/s]
[ComfyUI] Text encoder to dtype: torch.float16
[ComfyUI] Loading tokenizer (llm) from: /src/ComfyUI/models/LLM/llava-llama-3-8b-text-encoder-tokenizer
Executing node 30, title: HunyuanVideo TextEncode, class type: HyVideoTextEncode
[ComfyUI] llm prompt attention_mask shape: torch.Size([1, 161]), masked tokens: 146
[ComfyUI] clipL prompt attention_mask shape: torch.Size([1, 77]), masked tokens: 77
Executing node 41, title: HunyuanVideo Lora Select, class type: HyVideoLoraSelect
Executing node 1, title: HunyuanVideo Model Loader, class type: HyVideoModelLoader
[ComfyUI] model_type FLOW
[ComfyUI] The config attributes {'use_flow_sigmas': True, 'prediction_type': 'flow_prediction'} were passed to FlowMatchDiscreteScheduler, but are not expected and will be ignored. Please verify your scheduler_config.json configuration file.
[ComfyUI] Using accelerate to load and assign model weights to device...
[ComfyUI] Loading LoRA: replicate_82633b3b-74ca-4f4a-9470-65827f216d6c with strength: 1.0
[ComfyUI] Requested to load HyVideoModel
[ComfyUI] loaded completely 9.5367431640625e+25 12555.953247070312 True
[ComfyUI] Input (height, width, video_length) = (368, 640, 65)
Executing node 3, title: HunyuanVideo Sampler, class type: HyVideoSampler
[ComfyUI] The config attributes {'reverse': True, 'solver': 'euler'} were passed to DPMSolverMultistepScheduler, but are not expected and will be ignored. Please verify your scheduler_config.json configuration file.
[ComfyUI] Sampling 65 frames in 17 latents at 640x368 with 50 inference steps
[ComfyUI] Scheduler config: FrozenDict([('num_train_timesteps', 1000), ('flow_shift', 9.0), ('reverse', True), ('solver', 'euler'), ('n_tokens', None), ('_use_default_values', ['n_tokens', 'num_train_timesteps'])])[ComfyUI]
[ComfyUI] 0%|          | 0/50 [00:00<?, ?it/s]
[ComfyUI] 2%|▏         | 1/50 [00:02<01:58,  2.43s/it]
[ComfyUI] 4%|▍         | 2/50 [00:04<01:39,  2.07s/it]
[ComfyUI] 6%|▌         | 3/50 [00:06<01:42,  2.18s/it]
[ComfyUI] 8%|▊         | 4/50 [00:08<01:42,  2.22s/it]
[ComfyUI] 10%|█         | 5/50 [00:11<01:41,  2.25s/it]
[ComfyUI] 12%|█▏        | 6/50 [00:13<01:39,  2.27s/it]
[ComfyUI] 14%|█▍        | 7/50 [00:15<01:38,  2.28s/it]
[ComfyUI] 16%|█▌        | 8/50 [00:18<01:36,  2.29s/it]
[ComfyUI] 18%|█▊        | 9/50 [00:20<01:33,  2.29s/it]
[ComfyUI] 20%|██        | 10/50 [00:22<01:31,  2.29s/it]
[ComfyUI] 22%|██▏       | 11/50 [00:24<01:29,  2.30s/it]
[ComfyUI] 24%|██▍       | 12/50 [00:27<01:27,  2.30s/it]
[ComfyUI] 26%|██▌       | 13/50 [00:29<01:25,  2.30s/it]
[ComfyUI] 28%|██▊       | 14/50 [00:31<01:22,  2.30s/it]
[ComfyUI] 30%|███       | 15/50 [00:34<01:20,  2.30s/it]
[ComfyUI] 32%|███▏      | 16/50 [00:36<01:18,  2.30s/it]
[ComfyUI] 34%|███▍      | 17/50 [00:38<01:15,  2.30s/it]
[ComfyUI] 36%|███▌      | 18/50 [00:41<01:13,  2.30s/it]
[ComfyUI] 38%|███▊      | 19/50 [00:43<01:11,  2.30s/it]
[ComfyUI] 40%|████      | 20/50 [00:45<01:09,  2.30s/it]
[ComfyUI] 42%|████▏     | 21/50 [00:47<01:06,  2.30s/it]
[ComfyUI] 44%|████▍     | 22/50 [00:50<01:04,  2.30s/it]
[ComfyUI] 46%|████▌     | 23/50 [00:52<01:02,  2.30s/it]
[ComfyUI] 48%|████▊     | 24/50 [00:54<00:59,  2.30s/it]
[ComfyUI] 50%|█████     | 25/50 [00:57<00:57,  2.30s/it]
[ComfyUI] 52%|█████▏    | 26/50 [00:59<00:55,  2.30s/it]
[ComfyUI] 54%|█████▍    | 27/50 [01:01<00:52,  2.30s/it]
[ComfyUI] 56%|█████▌    | 28/50 [01:04<00:50,  2.30s/it]
[ComfyUI] 58%|█████▊    | 29/50 [01:06<00:48,  2.30s/it]
[ComfyUI] 60%|██████    | 30/50 [01:08<00:46,  2.30s/it]
[ComfyUI] 62%|██████▏   | 31/50 [01:10<00:43,  2.30s/it]
[ComfyUI] 64%|██████▍   | 32/50 [01:13<00:41,  2.30s/it]
[ComfyUI] 66%|██████▌   | 33/50 [01:15<00:39,  2.30s/it]
[ComfyUI] 68%|██████▊   | 34/50 [01:17<00:36,  2.30s/it]
[ComfyUI] 70%|███████   | 35/50 [01:20<00:34,  2.30s/it]
[ComfyUI] 72%|███████▏  | 36/50 [01:22<00:32,  2.30s/it]
[ComfyUI] 74%|███████▍  | 37/50 [01:24<00:29,  2.30s/it]
[ComfyUI] 76%|███████▌  | 38/50 [01:27<00:27,  2.30s/it]
[ComfyUI] 78%|███████▊  | 39/50 [01:29<00:25,  2.30s/it]
[ComfyUI] 80%|████████  | 40/50 [01:31<00:23,  2.30s/it]
[ComfyUI] 82%|████████▏ | 41/50 [01:33<00:20,  2.30s/it]
[ComfyUI] 84%|████████▍ | 42/50 [01:36<00:18,  2.30s/it]
[ComfyUI] 86%|████████▌ | 43/50 [01:38<00:16,  2.30s/it]
[ComfyUI] 88%|████████▊ | 44/50 [01:40<00:13,  2.30s/it]
[ComfyUI] 90%|█████████ | 45/50 [01:43<00:11,  2.31s/it]
[ComfyUI] 92%|█████████▏| 46/50 [01:45<00:09,  2.30s/it]
[ComfyUI] 94%|█████████▍| 47/50 [01:47<00:06,  2.30s/it]
[ComfyUI] 96%|█████████▌| 48/50 [01:50<00:04,  2.30s/it]
[ComfyUI] 98%|█████████▊| 49/50 [01:52<00:02,  2.30s/it]
[ComfyUI] 100%|██████████| 50/50 [01:54<00:00,  2.30s/it]
[ComfyUI] 100%|██████████| 50/50 [01:54<00:00,  2.29s/it]
[ComfyUI] Allocated memory: memory=12.760 GB
[ComfyUI] Max allocated memory: max_memory=15.559 GB
[ComfyUI] Max reserved memory: max_reserved=16.875 GB
[ComfyUI]
Executing node 5, title: HunyuanVideo Decode, class type: HyVideoDecode
[ComfyUI] Decoding rows:   0%|          | 0/2 [00:00<?, ?it/s]
[ComfyUI] Decoding rows:  50%|█████     | 1/2 [00:01<00:01,  1.52s/it]
[ComfyUI] Decoding rows: 100%|██████████| 2/2 [00:02<00:00,  1.27s/it]
[ComfyUI] Decoding rows: 100%|██████████| 2/2 [00:02<00:00,  1.31s/it]
[ComfyUI]
[ComfyUI] Blending tiles:   0%|          | 0/2 [00:00<?, ?it/s]
[ComfyUI] Blending tiles: 100%|██████████| 2/2 [00:00<00:00, 28.38it/s]
[ComfyUI]
[ComfyUI] Decoding rows:   0%|          | 0/2 [00:00<?, ?it/s]
[ComfyUI] Decoding rows:  50%|█████     | 1/2 [00:00<00:00,  2.46it/s]
[ComfyUI] Decoding rows: 100%|██████████| 2/2 [00:00<00:00,  2.96it/s]
[ComfyUI] Decoding rows: 100%|██████████| 2/2 [00:00<00:00,  2.87it/s]
[ComfyUI]
[ComfyUI] Blending tiles:   0%|          | 0/2 [00:00<?, ?it/s]
Executing node 34, title: Video Combine 🎥🅥🅗🅢, class type: VHS_VideoCombine
[ComfyUI] Blending tiles: 100%|██████████| 2/2 [00:00<00:00, 85.90it/s]
[ComfyUI] Prompt executed in 149.72 seconds
outputs:  {'34': {'gifs': [{'filename': 'HunyuanVideo_00001.mp4', 'subfolder': '', 'type': 'output', 'format': 'video/h264-mp4', 'frame_rate': 16.0, 'workflow': 'HunyuanVideo_00001.png', 'fullpath': '/tmp/outputs/HunyuanVideo_00001.mp4'}]}}
====================================
HunyuanVideo_00001.png
HunyuanVideo_00001.mp4
Version Details
Version ID
e010979271e08c7c51cb50eba7efe208c175a7f0a6deeab736c341546eab1013
Version Created
January 23, 2025