kelvincai522/any-comfyui-workflow
About
ComfyUI API
Example Output
Output
Performance Metrics
Prediction Time: 4.66s
Total Time: 33.33s
All Input Parameters
{
"output_format": "webp",
"workflow_json": "{\n \"1\": {\n \"inputs\": {\n \"ckpt_name\": \"prefectIllustriousXL_v10.safetensors\"\n },\n \"class_type\": \"CheckpointLoaderSimple\",\n \"_meta\": {\n \"title\": \"Load Checkpoint\"\n }\n },\n \"2\": {\n \"inputs\": {\n \"seed\": 72,\n \"steps\": 24,\n \"cfg\": 5.5,\n \"sampler_name\": \"dpmpp_2m\",\n \"scheduler\": \"karras\",\n \"denoise\": 1,\n \"model\": [\n \"12\",\n 0\n ],\n \"positive\": [\n \"4\",\n 0\n ],\n \"negative\": [\n \"7\",\n 0\n ],\n \"latent_image\": [\n \"3\",\n 0\n ]\n },\n \"class_type\": \"KSampler\",\n \"_meta\": {\n \"title\": \"KSampler\"\n }\n },\n \"3\": {\n \"inputs\": {\n \"width\": 1024,\n \"height\": 1024,\n \"batch_size\": 1\n },\n \"class_type\": \"EmptyLatentImage\",\n \"_meta\": {\n \"title\": \"Empty Latent Image\"\n }\n },\n \"4\": {\n \"inputs\": {\n \"text\": \"masterpiece,best quality,amazing quality,absurdres, retro_artstyle, flat_colors,\\n1girl, green_eyes, brown_hair, ponytail, large_breasts, wedding dress\",\n \"clip\": [\n \"8\",\n 1\n ]\n },\n \"class_type\": \"CLIPTextEncode\",\n \"_meta\": {\n \"title\": \"CLIP Text Encode (Prompt)\"\n }\n },\n \"5\": {\n \"inputs\": {\n \"samples\": [\n \"2\",\n 0\n ],\n \"vae\": [\n \"1\",\n 2\n ]\n },\n \"class_type\": \"VAEDecode\",\n \"_meta\": {\n \"title\": \"VAE Decode\"\n }\n },\n \"6\": {\n \"inputs\": {\n \"filename_prefix\": \"ComfyUI\",\n \"images\": [\n \"5\",\n 0\n ]\n },\n \"class_type\": \"SaveImage\",\n \"_meta\": {\n \"title\": \"Save Image\"\n }\n },\n \"7\": {\n \"inputs\": {\n \"text\": \"bad quality,worst quality,worst detail,sketch,censored, artist name, signature, watermark, logo, badge teeth, shading, (3d), (text)\",\n \"clip\": [\n \"8\",\n 1\n ]\n },\n \"class_type\": \"CLIPTextEncode\",\n \"_meta\": {\n \"title\": \"CLIP Text Encode (Prompt)\"\n }\n },\n \"8\": {\n \"inputs\": {\n \"lora_name\": \"bb-style-illust.safetensors\",\n \"strength_model\": 0.7000000000000001,\n \"strength_clip\": 0.7000000000000001,\n \"model\": [\n \"1\",\n 0\n ],\n \"clip\": [\n \"1\",\n 1\n ]\n },\n \"class_type\": \"LoraLoader\",\n \"_meta\": {\n \"title\": \"Load LoRA\"\n }\n },\n \"12\": {\n \"inputs\": {\n \"object_to_patch\": \"diffusion_model\",\n \"residual_diff_threshold\": 0.2,\n \"start\": 0,\n \"end\": 1,\n \"max_consecutive_cache_hits\": -1,\n \"model\": [\n \"8\",\n 0\n ]\n },\n \"class_type\": \"ApplyFBCacheOnModel\",\n \"_meta\": {\n \"title\": \"Apply First Block Cache\"\n }\n }\n}",
"output_quality": 95,
"randomise_seeds": true,
"force_reset_cache": false,
"return_temp_files": false
}
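These inputs can be submitted as-is through the Replicate Python client. The sketch below is not part of the model page: it assumes REPLICATE_API_TOKEN is set in the environment, uses the version hash from the Version Details section, and loads the API-format workflow from a hypothetical workflow_api.json file; the exact output type (typically a list of URLs to the generated images) may vary by client version.

import json
import replicate

# Load a ComfyUI "Save (API format)" export (hypothetical filename).
workflow = json.load(open("workflow_api.json"))

output = replicate.run(
    "kelvincai522/any-comfyui-workflow:6a0f39c04f51378f3d3d3c83d89695ab4b60ea78e64cf07fd68567b3716cdcda",
    input={
        "workflow_json": json.dumps(workflow),  # must be a JSON string, not a dict
        "output_format": "webp",
        "output_quality": 95,
        "randomise_seeds": True,
        "force_reset_cache": False,
        "return_temp_files": False,
    },
)
print(output)  # typically a list of URLs to the generated image(s)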
Input Parameters
- input_file: Input image, video, tar or zip file. Read the guidance on workflows and input files here: https://github.com/replicate/cog-comfyui. Alternatively, you can replace inputs with URLs in your JSON workflow and the model will download them.
- output_format: Format of the output images.
- workflow_json: Your ComfyUI workflow as a JSON string or URL. You must use the API version of your workflow; export it from ComfyUI with 'Save (API format)'. Instructions here: https://github.com/replicate/cog-comfyui. A minimal sketch of preparing this value follows this list.
- output_quality: Quality of the output images, from 0 to 100. 100 is best quality, 0 is lowest quality.
- randomise_seeds: Automatically randomise seeds (seed, noise_seed, rand_seed).
- force_reset_cache: Force reset the ComfyUI cache before running the workflow. Useful for debugging.
- return_temp_files: Return any temporary files, such as preprocessed controlnet images. Useful for debugging.
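As referenced under workflow_json above, here is a minimal sketch of preparing that value: load a 'Save (API format)' export from ComfyUI, patch a few node inputs, and serialize it back to a string. The node IDs "2" (KSampler) and "4" (positive prompt) match the example workflow shown above; they will differ in your own export, and the filename workflow_api.json is only a placeholder.

import json

with open("workflow_api.json") as f:  # API-format export from ComfyUI
    wf = json.load(f)

# Patch inputs on specific nodes before submission.
wf["4"]["inputs"]["text"] = "1girl, green_eyes, brown_hair, ponytail, wedding dress"
wf["2"]["inputs"]["seed"] = 72  # this value is overridden when randomise_seeds is true

workflow_json = json.dumps(wf)  # pass this string as the workflow_json input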
Output Schema
Output
Example Execution Logs
Checking inputs
====================================
Checking weights
✅ bb-style-illust.safetensors exists in ComfyUI/models/loras
✅ prefectIllustriousXL_v10.safetensors exists in ComfyUI/models/checkpoints
====================================
Randomising seed to 3505824806
Running workflow
[ComfyUI] got prompt
Executing node 1, title: Load Checkpoint, class type: CheckpointLoaderSimple
[ComfyUI] model weight dtype torch.float16, manual cast: None
[ComfyUI] model_type EPS
[ComfyUI] Using pytorch attention in VAE
[ComfyUI] Using pytorch attention in VAE
[ComfyUI] VAE load device: cuda:0, offload device: cpu, dtype: torch.bfloat16
[ComfyUI] CLIP/text encoder model load device: cuda:0, offload device: cpu, current: cpu, dtype: torch.float16
Executing node 3, title: Empty Latent Image, class type: EmptyLatentImage
Executing node 8, title: Load LoRA, class type: LoraLoader
Executing node 7, title: CLIP Text Encode (Prompt), class type: CLIPTextEncode
[ComfyUI] Requested to load SDXLClipModel
[ComfyUI] loaded completely 43939.05 1560.802734375 True
Executing node 4, title: CLIP Text Encode (Prompt), class type: CLIPTextEncode
Executing node 12, title: Apply First Block Cache, class type: ApplyFBCacheOnModel
Executing node 2, title: KSampler, class type: KSampler
[ComfyUI] Requested to load SDXL
[ComfyUI] loaded completely 42272.12216262818 4897.0483474731445 True
[ComfyUI] ColorMod: Can't find pypng! Please install to enable 16bit image support.
[ComfyUI] ColorMod: Ignoring node 'CV2TonemapDurand' due to cv2 edition/version
[ComfyUI] ------------------------------------------
[ComfyUI] Comfyroll Studio v1.76 : 175 Nodes Loaded
[ComfyUI] ------------------------------------------
[ComfyUI] ** For changes, please see patch notes at https://github.com/Suzie1/ComfyUI_Comfyroll_CustomNodes/blob/main/Patch_Notes.md
[ComfyUI] ** For help, please see the wiki at https://github.com/Suzie1/ComfyUI_Comfyroll_CustomNodes/wiki
[ComfyUI] ------------------------------------------
[ComfyUI] Making the "web\extensions\FizzleDorf" folder
[ComfyUI] Update to javascripts files detected
[ComfyUI] Copying Folder here to satisfy init, eventually I'll have stuff in here..txt to extensions folder
[ComfyUI] FizzleDorf Custom Nodes: Loaded
[ComfyUI] Please 'pip install xformers'
[ComfyUI] Nvidia APEX normalization not installed, using PyTorch LayerNorm
[ComfyUI] [tinyterraNodes] Loaded
[ComfyUI] Please 'pip install xformers'
[ComfyUI] Nvidia APEX normalization not installed, using PyTorch LayerNorm
[ComfyUI]
[ComfyUI] Efficiency Nodes: Attempting to add Control Net options to the 'HiRes-Fix Script' Node (comfyui_controlnet_aux add-on)...Success!
[ComfyUI] Efficiency Nodes Warning: Failed to import python package 'simpleeval'; related nodes disabled.
[ComfyUI]
[ComfyUI] writing new user config.
[ComfyUI]
[ComfyUI] [rgthree-comfy] Loaded 42 fantastic nodes. 🎉
[ComfyUI]
[ComfyUI] WAS Node Suite: OpenCV Python FFMPEG support is enabled
[ComfyUI] WAS Node Suite: `ffmpeg_bin_path` is set to: /usr/bin/ffmpeg
[ComfyUI] WAS Node Suite: Finished. Loaded 220 nodes successfully.
[ComfyUI] 0%| | 0/24 [00:00<?, ?it/s]
[ComfyUI] 4%|▍ | 1/24 [00:00<00:07, 3.03it/s]
[ComfyUI] 25%|██▌ | 6/24 [00:00<00:01, 15.57it/s]
[ComfyUI] 38%|███▊ | 9/24 [00:00<00:00, 18.26it/s]
[ComfyUI] 50%|█████ | 12/24 [00:00<00:00, 19.87it/s]
[ComfyUI] 62%|██████▎ | 15/24 [00:00<00:00, 16.27it/s]
[ComfyUI] 71%|███████ | 17/24 [00:01<00:00, 16.18it/s]
[ComfyUI] 79%|███████▉ | 19/24 [00:01<00:00, 16.08it/s]
[ComfyUI] 88%|████████▊ | 21/24 [00:01<00:00, 16.03it/s]
[ComfyUI] 100%|██████████| 24/24 [00:01<00:00, 17.71it/s]
Executing node 5, title: VAE Decode, class type: VAEDecode
[ComfyUI] Requested to load AutoencoderKL
[ComfyUI] loaded completely 33828.225898742676 159.55708122253418 True
Executing node 6, title: Save Image, class type: SaveImage
[ComfyUI] Prompt executed in 4.23 seconds
outputs: {'6': {'images': [{'filename': 'ComfyUI_00001_.png', 'subfolder': '', 'type': 'output'}]}}
====================================
ComfyUI_00001_.png
Version Details
- Version ID
6a0f39c04f51378f3d3d3c83d89695ab4b60ea78e64cf07fd68567b3716cdcda
- Version Created
July 2, 2025