shridharathi/glass-face-vid
About
A text-to-video LoRA for Wan 2.1 (14B) that generates glass-face videos. Prompts should include the trigger phrase "GLASS style", as in the example below.
Example Output
Prompt: "GLASS style, glass face of a girl with rainbow prism glass lighting"
Performance Metrics
- Prediction Time: 167.81s
- Total Time: 167.82s
All Input Parameters
{ "frames": 81, "prompt": "GLASS style, glass face of a girl with rainbow prism glass lighting", "fast_mode": "Balanced", "resolution": "480p", "aspect_ratio": "16:9", "sample_shift": 8, "sample_steps": 30, "negative_prompt": "", "lora_strength_clip": 1, "sample_guide_scale": 5, "lora_strength_model": 1 }
Input Parameters
- seed: Set a seed for reproducibility. Random by default.
- image: Image to use as a starting frame for image-to-video generation.
- frames: The number of frames to generate (1 to 5 seconds of video; see the sketch after this list).
- prompt (required): Text prompt for video generation.
- fast_mode: Speed up generation with different levels of acceleration. Faster modes may degrade quality somewhat, and the speedup depends on the content, so different videos may see different speedups.
- resolution: The resolution of the video. 720p is not supported for the 1.3B model.
- aspect_ratio: The aspect ratio of the video: 16:9, 9:16, 1:1, etc.
- sample_shift: Sample shift factor.
- sample_steps: Number of generation steps. Fewer steps means faster generation at the expense of output quality; 30 steps is sufficient for most prompts.
- negative_prompt: Things you do not want to see in your video.
- replicate_weights: Replicate LoRA weights to use. Leave blank to use the default weights.
- lora_strength_clip: Strength of the LoRA applied to the CLIP model. 0.0 is no LoRA.
- sample_guide_scale: Higher guide scale makes prompt adherence better, but can reduce variation.
- lora_strength_model: Strength of the LoRA applied to the diffusion model. 0.0 is no LoRA.
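The frames-to-duration relationship follows from the output frame rate: the execution logs below report 16 fps, so the 81 frames used in this example come out to roughly five seconds. A minimal sketch, assuming that 16 fps rate:

def clip_duration_seconds(frames: int, fps: float = 16.0) -> float:
    """Approximate clip length for a given frame count (fps taken from the logs)."""
    return frames / fps

print(clip_duration_seconds(81))  # ~5.06 s, matching the "1 to 5 seconds" range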
Output Schema
Example Execution Logs
Random seed set to: 2425086395
✅ 14b_d250a7c3d7cb8a92a384093a122a8339.safetensors already cached
Checking inputs
====================================
Checking weights
✅ 14b_d250a7c3d7cb8a92a384093a122a8339.safetensors exists in loras directory
✅ wan_2.1_vae.safetensors exists in ComfyUI/models/vae
✅ wan2.1_t2v_14B_bf16.safetensors exists in ComfyUI/models/diffusion_models
✅ umt5_xxl_fp16.safetensors exists in ComfyUI/models/text_encoders
====================================
Running workflow
[ComfyUI] got prompt
Executing node 6, title: CLIP Text Encode (Positive Prompt), class type: CLIPTextEncode
Executing node 3, title: KSampler, class type: KSampler
[ComfyUI]   0%|          | 0/30 [00:00<?, ?it/s]
[ComfyUI] Resetting TeaCache state
[ComfyUI]   3%|▎         | 1/30 [00:06<03:11, 6.62s/it]
[ComfyUI]   7%|▋         | 2/30 [00:16<03:51, 8.26s/it]
[ComfyUI]  10%|█         | 3/30 [00:25<03:58, 8.82s/it]
[ComfyUI] TeaCache: Initialized
[ComfyUI]  13%|█▎        | 4/30 [00:37<04:26, 10.27s/it]
[ComfyUI]  20%|██        | 6/30 [00:47<02:57, 7.41s/it]
[ComfyUI]  27%|██▋       | 8/30 [00:57<02:19, 6.32s/it]
[ComfyUI]  33%|███▎      | 10/30 [01:07<01:55, 5.77s/it]
[ComfyUI]  40%|████      | 12/30 [01:17<01:38, 5.46s/it]
[ComfyUI]  47%|████▋     | 14/30 [01:26<01:24, 5.27s/it]
[ComfyUI]  53%|█████▎    | 16/30 [01:36<01:11, 5.14s/it]
[ComfyUI]  60%|██████    | 18/30 [01:46<01:00, 5.06s/it]
[ComfyUI]  67%|██████▋   | 20/30 [01:56<00:50, 5.01s/it]
[ComfyUI]  73%|███████▎  | 22/30 [02:05<00:39, 4.97s/it]
[ComfyUI]  80%|████████  | 24/30 [02:15<00:29, 4.94s/it]
[ComfyUI]  87%|████████▋ | 26/30 [02:25<00:19, 4.92s/it]
[ComfyUI]  93%|█████████▎| 28/30 [02:35<00:09, 4.91s/it]
[ComfyUI] 100%|██████████| 30/30 [02:44<00:00, 4.90s/it]
Executing node 8, title: VAE Decode, class type: VAEDecode
Executing node 50, title: Video Combine 🎥🅥🅗🅢, class type: VHS_VideoCombine
[ComfyUI] 100%|██████████| 30/30 [02:44<00:00, 5.50s/it]
[ComfyUI] Prompt executed in 167.65 seconds
outputs: {'50': {'gifs': [{'filename': 'R8_Wan_00001.mp4', 'subfolder': '', 'type': 'output', 'format': 'video/h264-mp4', 'frame_rate': 16.0, 'workflow': 'R8_Wan_00001.png', 'fullpath': '/tmp/outputs/R8_Wan_00001.mp4'}]}}
====================================
R8_Wan_00001.png
R8_Wan_00001.mp4
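The workflow writes an H.264 MP4 at 16 fps (see the outputs entry in the log above). A minimal sketch for saving the prediction result locally, assuming the output returned by replicate.run in the earlier snippet is a plain URL string (newer client versions may return a file-like object instead):

import urllib.request

# Assumption: 'output' is a URL string pointing at the generated H.264 MP4.
urllib.request.urlretrieve(output, "glass_face_vid.mp4")
print("saved glass_face_vid.mp4")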
Version Details
- Version ID: 6d6011bef3767067cb5eb16237b80944c7910ae9145d58bbdc87f9fbbc269c69
- Version Created: March 28, 2025