noahgsolomon/yumemono
About
YUMEMONO

Example Output
Prompt:
"Yumemono style, pixelated anime girl with short light blue hair and white flower crown, pale skin, large bright blue eyes, gentle smile, wearing black and white maid outfit with frilly details and purple ribbon hair accessory, against a spring background with pink cherry blossom trees, clear blue sky, and distant buildings."
Output

Performance Metrics
Prediction Time: 17.15s
Total Time: 17.16s
All Input Parameters
{
  "steps": 30,
  "prompt": "Yumemono style, pixelated anime girl with short light blue hair and white flower crown, pale skin, large bright blue eyes, gentle smile, wearing black and white maid outfit with frilly details and purple ribbon hair accessory, against a spring background with pink cherry blossom trees, clear blue sky, and distant buildings.",
  "denoise": 1,
  "guidance": 3.5,
  "aspect_ratio": "1:1 (Perfect Square)",
  "lora_strength": 1,
  "output_format": "webp",
  "output_quality": 95
}
Input Parameters
- seed: Set a seed for reproducibility. Random by default.
- steps: Number of sampling steps.
- prompt: Describe what you want to generate.
- denoise: Denoising strength.
- guidance: Flux guidance scale.
- aspect_ratio: Aspect ratio for the generated image.
- lora_strength: Strength of the Yumemono LoRA effect.
- output_quality: Quality of the output images, from 0 to 100 (100 is best quality, 0 is lowest).
- output_format: Format of the output images.
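The parameters above map directly onto the `input` object of a Replicate prediction. As a rough sketch (the `build_input` helper is invented here for illustration; the defaults mirror the example request above, and the commented-out `replicate.run` call follows the standard Replicate Python client), a request might look like:

```python
# Hypothetical helper: assemble and sanity-check an input payload for this
# model. Defaults match the example parameters shown above.
def build_input(prompt, steps=30, denoise=1, guidance=3.5,
                aspect_ratio="1:1 (Perfect Square)", lora_strength=1,
                output_format="webp", output_quality=95, seed=None):
    # output_quality is documented as 0-100; reject anything outside that range.
    if not 0 <= output_quality <= 100:
        raise ValueError("output_quality must be in [0, 100]")
    payload = {
        "prompt": prompt,
        "steps": steps,
        "denoise": denoise,
        "guidance": guidance,
        "aspect_ratio": aspect_ratio,
        "lora_strength": lora_strength,
        "output_format": output_format,
        "output_quality": output_quality,
    }
    if seed is not None:  # omit seed entirely for a random one, per the docs above
        payload["seed"] = seed
    return payload

# Example call with the Replicate Python client (requires `pip install replicate`
# and a REPLICATE_API_TOKEN environment variable); the version hash is the one
# listed under Version Details below:
# import replicate
# output = replicate.run(
#     "noahgsolomon/yumemono:e4f38e7a01f7150fcd8264b9af728b6fd5f0ce41b487ada9f0ab85b9a49b323f",
#     input=build_input("Yumemono style, pixelated anime girl ..."),
# )
```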
Output Schema
Output
Example Execution Logs
Random seed set to: 1320448579
Checking inputs
====================================
Checking weights
✅ flux1-dev.safetensors exists in ComfyUI/models/diffusion_models
✅ t5xxl_fp16.safetensors exists in ComfyUI/models/text_encoders
✅ clip_l.safetensors exists in ComfyUI/models/text_encoders
✅ ae.safetensors exists in ComfyUI/models/vae
====================================
Running workflow
[ComfyUI] got prompt
Executing node 8, title: Flux Resolution Calc, class type: FluxResolutionNode
Executing node 6, title: Load VAE, class type: VAELoader
[ComfyUI] Using pytorch attention in VAE
[ComfyUI] Using pytorch attention in VAE
[ComfyUI] VAE load device: cuda:0, offload device: cpu, dtype: torch.bfloat16
Executing node 9, title: EmptySD3LatentImage, class type: EmptySD3LatentImage
Executing node 5, title: DualCLIPLoader, class type: DualCLIPLoader
[ComfyUI] CLIP/text encoder model load device: cuda:0, offload device: cpu, current: cpu, dtype: torch.float16
[ComfyUI] clip missing: ['text_projection.weight']
Executing node 4, title: Load Diffusion Model, class type: UNETLoader
[ComfyUI] model weight dtype torch.bfloat16, manual cast: None
[ComfyUI] model_type FLUX
Executing node 1, title: Load LoRA, class type: LoraLoaderFromURL
[ComfyUI] Requested to load FluxClipModel_
Executing node 12, title: CLIP Text Encode (Prompt), class type: CLIPTextEncode
[ComfyUI] loaded completely 79408.425 9319.23095703125 True
Executing node 11, title: CLIP Text Encode (Prompt), class type: CLIPTextEncode
Executing node 10, title: FluxGuidance, class type: FluxGuidance
Executing node 7, title: KSampler, class type: KSampler
[ComfyUI] Requested to load Flux
[ComfyUI] loaded completely 69975.06891708374 22700.134887695312 True
[ComfyUI] ### Loading: ComfyUI-Impact-Pack (Subpack: V0.6)
[ComfyUI] [WARN] ComfyUI-Impact-Pack: `ComfyUI` or `ComfyUI-Manager` is an outdated version.
[ComfyUI] [Impact Pack] Wildcards loading done.
[ComfyUI] Initializing ControlAltAI Nodes
[ComfyUI] Creating huggingface_cache directory within comfy
[ComfyUI]   0%|          | 0/30 [00:00<?, ?it/s]
[ComfyUI]   3%|▎         | 1/30 [00:00<00:07, 3.80it/s]
...
[ComfyUI]  97%|█████████▋| 29/30 [00:06<00:00, 4.49it/s]
[ComfyUI] 100%|██████████| 30/30 [00:06<00:00, 4.51it/s]
[ComfyUI] Requested to load AutoencodingEngine
Executing node 13, title: VAE Decode, class type: VAEDecode
[ComfyUI] loaded completely 46027.20537567139 159.87335777282715 True
Executing node 14, title: Save Image, class type: SaveImage
[ComfyUI] Prompt executed in 16.84 seconds
outputs: {'14': {'images': [{'filename': 'output_00001_.png', 'subfolder': '', 'type': 'output'}]}}
====================================
output_00001_.png
Version Details
- Version ID: e4f38e7a01f7150fcd8264b9af728b6fd5f0ce41b487ada9f0ab85b9a49b323f
- Version Created: March 26, 2025