deepfates/hunyuan-the-grand-budapest-hotel

▶️ 323 runs 📅 Jan 2025 ⚙️ Cog 0.13.6
text-to-video

About

HunyuanVideo model fine-tuned on The Grand Budapest Hotel (2014). The trigger word is "THGRN". For best results, start your prompt with "A video in the style of THGRN, THGRN".
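
For example, a run can be started from Python with the Replicate client. This is a minimal sketch assuming the replicate package is installed and REPLICATE_API_TOKEN is set in the environment; the model reference and trigger phrase come from this page, while the scene text and parameter values are purely illustrative.

import replicate

# Minimal sketch: run this fine-tune with the THGRN trigger phrase.
# The prompt body and parameter values below are illustrative only.
output = replicate.run(
    "deepfates/hunyuan-the-grand-budapest-hotel",
    input={
        "prompt": (
            "A video in the style of THGRN, THGRN "
            "A concierge in a purple uniform stands in a red hotel elevator."
        ),
        "width": 640,
        "height": 360,
        "num_frames": 33,
        "steps": 50,
    },
)
print(output)  # URI of the generated MP4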

Example Output

Prompt:

"A video in the style of THGRN, THGRN The video clip features three individuals standing in a red elevator. The person on the left is wearing a purple uniform with gold buttons and a matching cap, standing with one hand on the elevator door. The person in the center is seated in a chair, wearing a light blue suit with a white shirt and a black bow tie. This individual has a mustache and is looking directly at the camera. The person on the right is also wearing a purple uniform with a cap that has the word BOBBY written on it. The background of the elevator is a vibrant red, creating a striking contrast with the purple uniforms. The overall scene"

Output

Performance Metrics

154.33s Prediction Time
157.97s Total Time
All Input Parameters
{
  "crf": 19,
  "seed": 12345,
  "steps": 50,
  "width": 640,
  "height": 360,
  "prompt": "A video in the style of THGRN, THGRN The video clip features three individuals standing in a red elevator. The person on the left is wearing a purple uniform with gold buttons and a matching cap, standing with one hand on the elevator door. The person in the center is seated in a chair, wearing a light blue suit with a white shirt and a black bow tie. This individual has a mustache and is looking directly at the camera. The person on the right is also wearing a purple uniform with a cap that has the word BOBBY written on it. The background of the elevator is a vibrant red, creating a striking contrast with the purple uniforms. The overall scene\n",
  "lora_url": "",
  "scheduler": "DPMSolverMultistepScheduler",
  "flow_shift": 9,
  "frame_rate": 16,
  "num_frames": 66,
  "enhance_end": 1,
  "enhance_start": 0,
  "force_offload": true,
  "lora_strength": 1,
  "enhance_double": true,
  "enhance_single": true,
  "enhance_weight": 0.3,
  "guidance_scale": 6,
  "denoise_strength": 1
}
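
To reproduce this exact prediction, the same input can be sent back to the model pinned to the version listed under Version Details below. A hedged sketch with the Replicate Python client; the abbreviated prompt stands in for the full text shown above.

import replicate

# Sketch: re-run the documented prediction against the pinned version.
# VERSION is the ID from the Version Details section of this page.
VERSION = "92da6ced97eac105bc66dcf75c64ec5e11e11dced920d7708c396c3b429929b6"

params = {
    "crf": 19,
    "seed": 12345,  # fixed seed for reproducibility
    "steps": 50,
    "width": 640,
    "height": 360,
    "prompt": "A video in the style of THGRN, THGRN ...",  # full prompt as shown above
    "guidance_scale": 6,
    "num_frames": 66,
    "frame_rate": 16,
}

output = replicate.run(
    f"deepfates/hunyuan-the-grand-budapest-hotel:{VERSION}",
    input=params,
)
print(output)
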
Input Parameters
crf · Type: integer · Default: 19 · Range: 0 - 51
CRF (quality) for H264 encoding. Lower values = higher quality.
seed · Type: integer
Set a seed for reproducibility. Random by default.
steps · Type: integer · Default: 50 · Range: 1 - 150
Number of diffusion steps.
width · Type: integer · Default: 640 · Range: 64 - 1536
Width for the generated video.
height · Type: integer · Default: 360 · Range: 64 - 1024
Height for the generated video.
prompt · Type: string · Default: (empty)
The text prompt describing your video scene.
lora_url · Type: string · Default: (empty)
A URL pointing to your LoRA .safetensors file or a Hugging Face repo (e.g. 'user/repo' - uses the first .safetensors file).
scheduler · Default: DPMSolverMultistepScheduler
Algorithm used to generate the video frames.
flow_shift · Type: integer · Default: 9 · Range: 0 - 20
Video continuity factor (flow).
frame_rate · Type: integer · Default: 16 · Range: 1 - 60
Video frame rate.
num_frames · Type: integer · Default: 33 · Range: 1 - 1440
How many frames (duration) in the resulting video.
enhance_end · Type: number · Default: 1 · Range: 0 - 1
When to end enhancement in the video. Must be greater than enhance_start.
enhance_start · Type: number · Default: 0 · Range: 0 - 1
When to start enhancement in the video. Must be less than enhance_end.
force_offload · Type: boolean · Default: true
Whether to force offloading of model layers to the CPU.
lora_strength · Type: number · Default: 1 · Range: -10 - 10
Scale/strength for your LoRA.
enhance_double · Type: boolean · Default: true
Apply enhancement across frame pairs.
enhance_single · Type: boolean · Default: true
Apply enhancement to individual frames.
enhance_weight · Type: number · Default: 0.3 · Range: 0 - 2
Strength of the video enhancement effect.
guidance_scale · Type: number · Default: 6 · Range: 0 - 30
Overall influence of the text prompt vs. the model.
denoise_strength · Type: number · Default: 1 · Range: 0 - 2
Controls how strongly noise is applied each step.
replicate_weights · Type: string
A .tar file containing LoRA weights from Replicate.
Output Schema

Output

Type: string · Format: uri

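The model returns a single URI string pointing at the rendered MP4. A small sketch of saving it locally, assuming the client hands back the URI as a plain string (newer client versions may wrap it in a file-like object instead):

import urllib.request

# Placeholder URI; substitute the string returned by the model.
output = "https://example.com/HunyuanVideo_00001.mp4"
urllib.request.urlretrieve(output, "HunyuanVideo_00001.mp4")
print("saved HunyuanVideo_00001.mp4")
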
Example Execution Logs
Seed set to: 12345
⚠️  Adjusted dimensions from 640x360 to 640x368 to satisfy model requirements
⚠️  Adjusted frame count from 66 to 65 to satisfy model requirements
USING REPLICATE WEIGHTS (preferred method)
🎯 USING REPLICATE WEIGHTS TAR FILE 🎯
----------------------------------------
📦 Processing replicate weights tar file...
🔄 Will rename LoRA to: replicate_c0abee79-3c9b-4898-bc1c-58ab2da7a01c.safetensors
📂 Extracting tar contents...
✅ Found lora_comfyui.safetensors in tar
✨ Successfully copied LoRA to: ComfyUI/models/loras/replicate_c0abee79-3c9b-4898-bc1c-58ab2da7a01c.safetensors
----------------------------------------
Checking inputs
====================================
Checking weights
✅ hunyuan_video_720_fp8_e4m3fn.safetensors exists in ComfyUI/models/diffusion_models
✅ hunyuan_video_vae_bf16.safetensors exists in ComfyUI/models/vae
====================================
Running workflow
[ComfyUI] got prompt
Executing node 7, title: HunyuanVideo VAE Loader, class type: HyVideoVAELoader
[ComfyUI] Loading text encoder model (clipL) from: /src/ComfyUI/models/clip/clip-vit-large-patch14
Executing node 42, title: HunyuanVideo Enhance A Video, class type: HyVideoEnhanceAVideo
Executing node 16, title: (Down)Load HunyuanVideo TextEncoder, class type: DownloadAndLoadHyVideoTextEncoder
[ComfyUI] Text encoder to dtype: torch.float16
[ComfyUI] Loading tokenizer (clipL) from: /src/ComfyUI/models/clip/clip-vit-large-patch14
[ComfyUI] Loading text encoder model (llm) from: /src/ComfyUI/models/LLM/llava-llama-3-8b-text-encoder-tokenizer
[ComfyUI]
[ComfyUI] Loading checkpoint shards:   0%|          | 0/4 [00:00<?, ?it/s]
[ComfyUI] Loading checkpoint shards:  25%|██▌       | 1/4 [00:00<00:01,  1.59it/s]
[ComfyUI] Loading checkpoint shards:  50%|█████     | 2/4 [00:01<00:01,  1.63it/s]
[ComfyUI] Loading checkpoint shards:  75%|███████▌  | 3/4 [00:01<00:00,  1.65it/s]
[ComfyUI] Loading checkpoint shards: 100%|██████████| 4/4 [00:01<00:00,  2.35it/s]
[ComfyUI] Loading checkpoint shards: 100%|██████████| 4/4 [00:01<00:00,  2.02it/s]
[ComfyUI] Text encoder to dtype: torch.float16
[ComfyUI] Loading tokenizer (llm) from: /src/ComfyUI/models/LLM/llava-llama-3-8b-text-encoder-tokenizer
Executing node 30, title: HunyuanVideo TextEncode, class type: HyVideoTextEncode
[ComfyUI] llm prompt attention_mask shape: torch.Size([1, 161]), masked tokens: 141
[ComfyUI] clipL prompt attention_mask shape: torch.Size([1, 77]), masked tokens: 77
Executing node 41, title: HunyuanVideo Lora Select, class type: HyVideoLoraSelect
Executing node 1, title: HunyuanVideo Model Loader, class type: HyVideoModelLoader
[ComfyUI] model_type FLOW
[ComfyUI] The config attributes {'use_flow_sigmas': True, 'prediction_type': 'flow_prediction'} were passed to FlowMatchDiscreteScheduler, but are not expected and will be ignored. Please verify your scheduler_config.json configuration file.
[ComfyUI] Using accelerate to load and assign model weights to device...
[ComfyUI] Loading LoRA: replicate_c0abee79-3c9b-4898-bc1c-58ab2da7a01c with strength: 1.0
[ComfyUI] Requested to load HyVideoModel
[ComfyUI] loaded completely 9.5367431640625e+25 12555.953247070312 True
[ComfyUI] Input (height, width, video_length) = (368, 640, 65)
Executing node 3, title: HunyuanVideo Sampler, class type: HyVideoSampler
[ComfyUI] The config attributes {'reverse': True, 'solver': 'euler'} were passed to DPMSolverMultistepScheduler, but are not expected and will be ignored. Please verify your scheduler_config.json configuration file.
[ComfyUI] Sampling 65 frames in 17 latents at 640x368 with 50 inference steps
[ComfyUI] Scheduler config: FrozenDict([('num_train_timesteps', 1000), ('flow_shift', 9.0), ('reverse', True), ('solver', 'euler'), ('n_tokens', None), ('_use_default_values', ['n_tokens', 'num_train_timesteps'])])
[ComfyUI]
[ComfyUI] 0%|          | 0/50 [00:00<?, ?it/s]
[ComfyUI] 2%|▏         | 1/50 [00:02<01:59,  2.44s/it]
[ComfyUI] 4%|▍         | 2/50 [00:04<01:39,  2.08s/it]
[ComfyUI] 6%|▌         | 3/50 [00:06<01:42,  2.18s/it]
[ComfyUI] 8%|▊         | 4/50 [00:08<01:42,  2.23s/it]
[ComfyUI] 10%|█         | 5/50 [00:11<01:41,  2.26s/it]
[ComfyUI] 12%|█▏        | 6/50 [00:13<01:40,  2.28s/it]
[ComfyUI] 14%|█▍        | 7/50 [00:15<01:38,  2.28s/it]
[ComfyUI] 16%|█▌        | 8/50 [00:18<01:36,  2.29s/it]
[ComfyUI] 18%|█▊        | 9/50 [00:20<01:33,  2.29s/it]
[ComfyUI] 20%|██        | 10/50 [00:22<01:31,  2.29s/it]
[ComfyUI] 22%|██▏       | 11/50 [00:24<01:29,  2.30s/it]
[ComfyUI] 24%|██▍       | 12/50 [00:27<01:27,  2.30s/it]
[ComfyUI] 26%|██▌       | 13/50 [00:29<01:24,  2.30s/it]
[ComfyUI] 28%|██▊       | 14/50 [00:31<01:22,  2.30s/it]
[ComfyUI] 30%|███       | 15/50 [00:34<01:20,  2.30s/it]
[ComfyUI] 32%|███▏      | 16/50 [00:36<01:18,  2.30s/it]
[ComfyUI] 34%|███▍      | 17/50 [00:38<01:15,  2.30s/it]
[ComfyUI] 36%|███▌      | 18/50 [00:41<01:13,  2.30s/it]
[ComfyUI] 38%|███▊      | 19/50 [00:43<01:11,  2.30s/it]
[ComfyUI] 40%|████      | 20/50 [00:45<01:08,  2.30s/it]
[ComfyUI] 42%|████▏     | 21/50 [00:47<01:06,  2.30s/it]
[ComfyUI] 44%|████▍     | 22/50 [00:50<01:04,  2.30s/it]
[ComfyUI] 46%|████▌     | 23/50 [00:52<01:02,  2.30s/it]
[ComfyUI] 48%|████▊     | 24/50 [00:54<00:59,  2.30s/it]
[ComfyUI] 50%|█████     | 25/50 [00:57<00:57,  2.30s/it]
[ComfyUI] 52%|█████▏    | 26/50 [00:59<00:55,  2.30s/it]
[ComfyUI] 54%|█████▍    | 27/50 [01:01<00:52,  2.30s/it]
[ComfyUI] 56%|█████▌    | 28/50 [01:04<00:50,  2.30s/it]
[ComfyUI] 58%|█████▊    | 29/50 [01:06<00:48,  2.30s/it]
[ComfyUI] 60%|██████    | 30/50 [01:08<00:46,  2.30s/it]
[ComfyUI] 62%|██████▏   | 31/50 [01:10<00:43,  2.30s/it]
[ComfyUI] 64%|██████▍   | 32/50 [01:13<00:41,  2.30s/it]
[ComfyUI] 66%|██████▌   | 33/50 [01:15<00:39,  2.30s/it]
[ComfyUI] 68%|██████▊   | 34/50 [01:17<00:36,  2.30s/it]
[ComfyUI] 70%|███████   | 35/50 [01:20<00:35,  2.36s/it]
[ComfyUI] 72%|███████▏  | 36/50 [01:22<00:32,  2.34s/it]
[ComfyUI] 74%|███████▍  | 37/50 [01:24<00:30,  2.33s/it]
[ComfyUI] 76%|███████▌  | 38/50 [01:27<00:27,  2.32s/it]
[ComfyUI] 78%|███████▊  | 39/50 [01:29<00:25,  2.32s/it]
[ComfyUI] 80%|████████  | 40/50 [01:31<00:23,  2.32s/it]
[ComfyUI] 82%|████████▏ | 41/50 [01:34<00:20,  2.31s/it]
[ComfyUI] 84%|████████▍ | 42/50 [01:36<00:18,  2.30s/it]
[ComfyUI] 86%|████████▌ | 43/50 [01:38<00:16,  2.30s/it]
[ComfyUI] 88%|████████▊ | 44/50 [01:41<00:13,  2.30s/it]
[ComfyUI] 90%|█████████ | 45/50 [01:43<00:11,  2.30s/it]
[ComfyUI] 92%|█████████▏| 46/50 [01:45<00:09,  2.29s/it]
[ComfyUI] 94%|█████████▍| 47/50 [01:47<00:06,  2.29s/it]
[ComfyUI] 96%|█████████▌| 48/50 [01:50<00:04,  2.29s/it]
[ComfyUI] 98%|█████████▊| 49/50 [01:52<00:02,  2.29s/it]
[ComfyUI] 100%|██████████| 50/50 [01:54<00:00,  2.29s/it]
[ComfyUI] 100%|██████████| 50/50 [01:54<00:00,  2.30s/it]
[ComfyUI] Allocated memory: memory=12.760 GB
[ComfyUI] Max allocated memory: max_memory=15.559 GB
[ComfyUI] Max reserved memory: max_reserved=16.875 GB
Executing node 5, title: HunyuanVideo Decode, class type: HyVideoDecode
[ComfyUI]
[ComfyUI] Decoding rows:   0%|          | 0/2 [00:00<?, ?it/s]
[ComfyUI] Decoding rows:  50%|█████     | 1/2 [00:01<00:01,  1.52s/it]
[ComfyUI] Decoding rows: 100%|██████████| 2/2 [00:02<00:00,  1.27s/it]
[ComfyUI] Decoding rows: 100%|██████████| 2/2 [00:02<00:00,  1.31s/it]
[ComfyUI]
[ComfyUI] Blending tiles:   0%|          | 0/2 [00:00<?, ?it/s]
[ComfyUI] Blending tiles: 100%|██████████| 2/2 [00:00<00:00, 28.94it/s]
[ComfyUI]
[ComfyUI] Decoding rows:   0%|          | 0/2 [00:00<?, ?it/s]
[ComfyUI] Decoding rows:  50%|█████     | 1/2 [00:00<00:00,  2.45it/s]
[ComfyUI] Decoding rows: 100%|██████████| 2/2 [00:00<00:00,  2.96it/s]
[ComfyUI] Decoding rows: 100%|██████████| 2/2 [00:00<00:00,  2.86it/s]
[ComfyUI]
[ComfyUI] Blending tiles:   0%|          | 0/2 [00:00<?, ?it/s]
Executing node 34, title: Video Combine 🎥🅥🅗🅢, class type: VHS_VideoCombine
[ComfyUI] Blending tiles: 100%|██████████| 2/2 [00:00<00:00, 86.39it/s]
[ComfyUI] Prompt executed in 148.40 seconds
outputs:  {'34': {'gifs': [{'filename': 'HunyuanVideo_00001.mp4', 'subfolder': '', 'type': 'output', 'format': 'video/h264-mp4', 'frame_rate': 16.0, 'workflow': 'HunyuanVideo_00001.png', 'fullpath': '/tmp/outputs/HunyuanVideo_00001.mp4'}]}}
====================================
HunyuanVideo_00001.png
HunyuanVideo_00001.mp4
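
The warnings at the top of the log show the requested 640x360 / 66-frame input being adjusted to 640x368 / 65 frames, and the sampler then packs those 65 frames into 17 latents. A minimal sketch of that arithmetic, assuming (inferred from this log, not from model documentation) that spatial dimensions are rounded up to multiples of 16, frame counts are snapped down to the form 4k + 1, and the video VAE compresses time by a factor of 4:

# Sketch of the adjustments reported in the log above (assumptions noted in
# the comments; the exact rules live inside the model, not on this page).

def adjust_dimension(px: int, multiple: int = 16) -> int:
    # Assumed: round up to the nearest multiple of 16 (360 -> 368, 640 -> 640).
    return ((px + multiple - 1) // multiple) * multiple

def adjust_frame_count(frames: int) -> int:
    # Assumed: largest value of the form 4k + 1 not exceeding the request (66 -> 65).
    return ((frames - 1) // 4) * 4 + 1

def latent_frames(frames: int) -> int:
    # Temporal compression of 4 in the video VAE: 65 frames -> 17 latents.
    return (frames - 1) // 4 + 1

print(adjust_dimension(360), adjust_frame_count(66), latent_frames(65))  # 368 65 17
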
Version Details
Version ID
92da6ced97eac105bc66dcf75c64ec5e11e11dced920d7708c396c3b429929b6
Version Created
January 23, 2025