vetkastar/comfy-flux
About
ComfyUI with the Flux (flux1-dev) model.

Example Output
[Output image: ComfyUI_00001_.png]

Performance Metrics
- Prediction Time: 36.85s
- Total Time: 203.91s
All Input Parameters
{ "lora_urls": "", "output_format": "png", "workflow_json": "{\n \"6\": {\n \"inputs\": {\n \"text\": \"flowers with magic waves and circles\",\n \"clip\": [\n \"11\",\n 0\n ]\n },\n \"class_type\": \"CLIPTextEncode\",\n \"_meta\": {\n \"title\": \"CLIP Text Encode (Positive Prompt)\"\n }\n },\n \"8\": {\n \"inputs\": {\n \"samples\": [\n \"13\",\n 0\n ],\n \"vae\": [\n \"10\",\n 0\n ]\n },\n \"class_type\": \"VAEDecode\",\n \"_meta\": {\n \"title\": \"VAE Decode\"\n }\n },\n \"9\": {\n \"inputs\": {\n \"filename_prefix\": \"ComfyUI\",\n \"images\": [\n \"8\",\n 0\n ]\n },\n \"class_type\": \"SaveImage\",\n \"_meta\": {\n \"title\": \"Save Image\"\n }\n },\n \"10\": {\n \"inputs\": {\n \"vae_name\": \"ae.safetensors\"\n },\n \"class_type\": \"VAELoader\",\n \"_meta\": {\n \"title\": \"Load VAE\"\n }\n },\n \"11\": {\n \"inputs\": {\n \"clip_name1\": \"t5xxl_fp16.safetensors\",\n \"clip_name2\": \"clip_l.safetensors\",\n \"type\": \"flux\"\n },\n \"class_type\": \"DualCLIPLoader\",\n \"_meta\": {\n \"title\": \"DualCLIPLoader\"\n }\n },\n \"12\": {\n \"inputs\": {\n \"unet_name\": \"flux1-dev.safetensors\",\n \"weight_dtype\": \"default\"\n },\n \"class_type\": \"UNETLoader\",\n \"_meta\": {\n \"title\": \"Load Diffusion Model\"\n }\n },\n \"13\": {\n \"inputs\": {\n \"noise\": [\n \"25\",\n 0\n ],\n \"guider\": [\n \"22\",\n 0\n ],\n \"sampler\": [\n \"16\",\n 0\n ],\n \"sigmas\": [\n \"17\",\n 0\n ],\n \"latent_image\": [\n \"27\",\n 0\n ]\n },\n \"class_type\": \"SamplerCustomAdvanced\",\n \"_meta\": {\n \"title\": \"SamplerCustomAdvanced\"\n }\n },\n \"16\": {\n \"inputs\": {\n \"sampler_name\": \"euler\"\n },\n \"class_type\": \"KSamplerSelect\",\n \"_meta\": {\n \"title\": \"KSamplerSelect\"\n }\n },\n \"17\": {\n \"inputs\": {\n \"scheduler\": \"simple\",\n \"steps\": 20,\n \"denoise\": 1,\n \"model\": [\n \"30\",\n 0\n ]\n },\n \"class_type\": \"BasicScheduler\",\n \"_meta\": {\n \"title\": \"BasicScheduler\"\n }\n },\n \"22\": {\n \"inputs\": {\n \"model\": [\n \"30\",\n 0\n ],\n \"conditioning\": [\n \"26\",\n 0\n ]\n },\n \"class_type\": \"BasicGuider\",\n \"_meta\": {\n \"title\": \"BasicGuider\"\n }\n },\n \"25\": {\n \"inputs\": {\n \"noise_seed\": 219670278747233\n },\n \"class_type\": \"RandomNoise\",\n \"_meta\": {\n \"title\": \"RandomNoise\"\n }\n },\n \"26\": {\n \"inputs\": {\n \"guidance\": 3.5,\n \"conditioning\": [\n \"6\",\n 0\n ]\n },\n \"class_type\": \"FluxGuidance\",\n \"_meta\": {\n \"title\": \"FluxGuidance\"\n }\n },\n \"27\": {\n \"inputs\": {\n \"width\": 1024,\n \"height\": 1024,\n \"batch_size\": 1\n },\n \"class_type\": \"EmptySD3LatentImage\",\n \"_meta\": {\n \"title\": \"EmptySD3LatentImage\"\n }\n },\n \"30\": {\n \"inputs\": {\n \"max_shift\": 1.15,\n \"base_shift\": 0.5,\n \"width\": 1024,\n \"height\": 1024,\n \"model\": [\n \"12\",\n 0\n ]\n },\n \"class_type\": \"ModelSamplingFlux\",\n \"_meta\": {\n \"title\": \"ModelSamplingFlux\"\n }\n }\n}", "output_quality": 80, "randomise_seeds": true, "force_reset_cache": false, "return_temp_files": false }
Input Parameters
- lora_urls: LoRA model URLs to download. Format: [path/]url. One URL per line. (See the format sketch after this list.)
- input_file: Input image, tar, or zip file.
- output_format: Format of the output images.
- workflow_json: Your ComfyUI workflow as JSON.
- any_model_urls: Any other model URLs to download. Format: path/url (the path is required). One URL per line.
- output_quality: Quality of the output images, from 0 to 100. 100 is best quality, 0 is lowest quality.
- checkpoint_urls: Checkpoint model URLs to download. Format: [path/]url. One URL per line.
- controlnet_urls: ControlNet model URLs to download. Format: [path/]url. One URL per line.
- randomise_seeds: Automatically randomise seeds.
- force_reset_cache: Force reset the ComfyUI cache.
- return_temp_files: Return temporary files for debugging.
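The [path/]url format is easiest to see with a concrete example. The sketch below builds the lora_urls and any_model_urls strings in Python; the URLs and the upscale_models path prefix are illustrative assumptions, not taken from the model's documentation, and the path presumably selects a subdirectory under ComfyUI/models (as the download log below suggests).

```python
# Hypothetical URLs, used only to illustrate the "[path/]url, one URL per line" format.
lora_urls = "\n".join([
    "https://example.com/loras/style-a.safetensors",        # no path prefix: default LoRA folder
    "loras/https://example.com/loras/style-b.safetensors",  # explicit path prefix
])

# any_model_urls requires the path prefix (assumed here to be the target
# subdirectory under ComfyUI/models, e.g. upscale_models/).
any_model_urls = "upscale_models/https://example.com/models/4x_upscaler.pth"

inputs = {
    "lora_urls": lora_urls,
    "any_model_urls": any_model_urls,
    "output_format": "png",
    "output_quality": 80,
    "randomise_seeds": True,
}
```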
Output Schema
Output
Example Execution Logs
Checking inputs
====================================
Checking weights
⏳ Downloading flux1-dev.safetensors to ComfyUI/models/diffusion_models
✅ flux1-dev.safetensors downloaded to ComfyUI/models/diffusion_models in 12.05s, size: 22700.25MB
⏳ Downloading clip_l.safetensors to ComfyUI/models/clip
✅ clip_l.safetensors downloaded to ComfyUI/models/clip in 0.23s, size: 234.74MB
⏳ Downloading t5xxl_fp16.safetensors to ComfyUI/models/clip
✅ t5xxl_fp16.safetensors downloaded to ComfyUI/models/clip in 5.28s, size: 9334.41MB
⏳ Downloading ae.safetensors to ComfyUI/models/vae
✅ ae.safetensors downloaded to ComfyUI/models/vae in 0.30s, size: 319.77MB
====================================
Randomising noise_seed to 2599278736
Running workflow
got prompt
Executing node 10, title: Load VAE, class type: VAELoader
Using pytorch attention in VAE
Using pytorch attention in VAE
Executing node 25, title: RandomNoise, class type: RandomNoise
Executing node 12, title: Load Diffusion Model, class type: UNETLoader
model weight dtype torch.bfloat16, manual cast: None
model_type FLUX
Executing node 30, title: ModelSamplingFlux, class type: ModelSamplingFlux
Executing node 11, title: DualCLIPLoader, class type: DualCLIPLoader
clip missing: ['text_projection.weight']
Executing node 6, title: CLIP Text Encode (Positive Prompt), class type: CLIPTextEncode
Requested to load FluxClipModel_
Loading 1 new model
loaded completely 0.0 9319.23095703125 True
Executing node 26, title: FluxGuidance, class type: FluxGuidance
Executing node 22, title: BasicGuider, class type: BasicGuider
Executing node 16, title: KSamplerSelect, class type: KSamplerSelect
Executing node 17, title: BasicScheduler, class type: BasicScheduler
Executing node 27, title: EmptySD3LatentImage, class type: EmptySD3LatentImage
Requested to load Flux
Loading 1 new model
Executing node 13, title: SamplerCustomAdvanced, class type: SamplerCustomAdvanced
loaded completely 0.0 22700.097778320312 True
  0%|          | 0/20 [00:00<?, ?it/s]
  5%|▌         | 1/20 [00:00<00:06, 3.09it/s]
 10%|█         | 2/20 [00:00<00:06, 2.64it/s]
 15%|█▌        | 3/20 [00:01<00:06, 2.52it/s]
 20%|██        | 4/20 [00:01<00:06, 2.46it/s]
 25%|██▌       | 5/20 [00:01<00:06, 2.44it/s]
 30%|███       | 6/20 [00:02<00:05, 2.42it/s]
 35%|███▌      | 7/20 [00:02<00:05, 2.41it/s]
 40%|████      | 8/20 [00:03<00:04, 2.40it/s]
 45%|████▌     | 9/20 [00:03<00:04, 2.40it/s]
 50%|█████     | 10/20 [00:04<00:04, 2.40it/s]
 55%|█████▌    | 11/20 [00:04<00:03, 2.39it/s]
 60%|██████    | 12/20 [00:04<00:03, 2.39it/s]
 65%|██████▌   | 13/20 [00:05<00:02, 2.39it/s]
 70%|███████   | 14/20 [00:05<00:02, 2.39it/s]
 75%|███████▌  | 15/20 [00:06<00:02, 2.38it/s]
 80%|████████  | 16/20 [00:06<00:01, 2.38it/s]
 85%|████████▌ | 17/20 [00:07<00:01, 2.38it/s]
 90%|█████████ | 18/20 [00:07<00:00, 2.38it/s]
 95%|█████████▌| 19/20 [00:07<00:00, 2.38it/s]
100%|██████████| 20/20 [00:08<00:00, 2.38it/s]
100%|██████████| 20/20 [00:08<00:00, 2.41it/s]
Requested to load AutoencodingEngine
Loading 1 new model
Executing node 8, title: VAE Decode, class type: VAEDecode
loaded completely 0.0 159.87335777282715 True
Executing node 9, title: Save Image, class type: SaveImage
Prompt executed in 17.86 seconds
outputs: {'9': {'images': [{'filename': 'ComfyUI_00001_.png', 'subfolder': '', 'type': 'output'}]}}
====================================
ComfyUI_00001_.png
Version Details
- Version ID: a5920d183df01581339458a699f35105cb7741451012c67f66e53a60b58b61ce
- Version Created: December 17, 2024