littlemonsterzhang/wai90_sdxl 📝✓ → 🖼️

▶️ 15.4K runs 📅 Apr 2025 ⚙️ Cog 0.14.4
anime nsfw text-to-image

About

WAI-NSFW-illustrious-SDXL v.90

Example Output

Prompt:

"glowing eyes, streaked hair, glowing inner hair, glowing streaks, 1girl, jinhsi (wuthering_waves), white hair,blonde_eyes, large_breasts, (upper_body,close-up:1.4),Dynamic pose,串,((aqua and yellow:0.85) but Limited palette:1.2),//,Chinese architecture,Ink,hair flowers,bamboo forest,bamboo,//(long hair),small breasts,smile,((dragon)),Yellow lightning,(makeup,white eyeliner),white eyeshadow,white eyes,(long eyelashes),half-closed eyes,Dragon skirt,blush,holding sword,chinese sword,(dynamic angle:1.2),(hanfu:1.4),chinese clothes,transparent clothes,tassel,chinese knot,bare shoulders,kanzashi,draped silk,gold trim,wind,bokeh,scattered leaves,flying splashes,waterfall,splashed water,looking at viewer,
,masterpiece,best quality,amazing quality,"

Output

Example output

Performance Metrics

5.69s Prediction Time
109.45s Total Time
All Input Parameters
{
  "prompt": "glowing eyes, streaked hair, glowing inner hair, glowing streaks, 1girl, jinhsi \\(wuthering_waves\\), white hair,blonde_eyes, large_breasts, (upper_body,close-up:1.4),Dynamic pose,串,((aqua and yellow:0.85) but Limited palette:1.2),//,Chinese architecture,Ink,hair flowers,bamboo forest,bamboo,//(long hair),small breasts,smile,((dragon)),Yellow lightning,(makeup,white eyeliner),white eyeshadow,white eyes,(long eyelashes),half-closed eyes,Dragon skirt,blush,holding sword,chinese sword,(dynamic angle:1.2),(hanfu:1.4),chinese clothes,transparent clothes,tassel,chinese knot,bare shoulders,kanzashi,draped silk,gold trim,wind,bokeh,scattered leaves,flying splashes,waterfall,splashed water,looking at viewer,                                         \n,masterpiece,best quality,amazing quality,",
  "negative_prompt": "bad quality,worst quality,worst detail,sketch,censor,",
  "randomise_seeds": true
}
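Note the escaped parentheses in `jinhsi \(wuthering_waves\)` above: in ComfyUI/SDXL prompts, bare parentheses are attention-weighting syntax, so literal parentheses in a tag must be backslash-escaped. A minimal helper that applies the same escaping (the function name `escape_prompt_parens` is illustrative, not part of this model's API):

```python
def escape_prompt_parens(tag: str) -> str:
    """Backslash-escape literal parentheses so ComfyUI's
    attention-weighting parser treats them as plain text."""
    return tag.replace("(", r"\(").replace(")", r"\)")

print(escape_prompt_parens("jinhsi (wuthering_waves)"))  # → jinhsi \(wuthering_waves\)
```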
Input Parameters
prompt
Type: string · Default: "A beautiful landscape with mountains and a lake"
Text prompt for image generation

negative_prompt
Type: string · Default: "blurry, bad quality, distorted"
Negative prompt to specify what you don't want in the generated image

randomise_seeds
Type: boolean · Default: true
Automatically randomise seeds (seed, noise_seed, rand_seed)
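The three parameters above are the model's entire input schema. A sketch of calling it with the official `replicate` Python client, pinned to the version ID listed under Version Details below (the example prompt text is mine; the call is guarded on the API token being set):

```python
import os

# Input payload mirroring the schema above; only these three
# parameters are exposed by this model.
payload = {
    "prompt": "1girl, hanfu, bamboo forest, masterpiece, best quality",
    "negative_prompt": "bad quality, worst quality, sketch, censor",
    "randomise_seeds": True,
}

if os.environ.get("REPLICATE_API_TOKEN"):
    import replicate  # pip install replicate

    output = replicate.run(
        "littlemonsterzhang/wai90_sdxl:"
        "820ce2c86370ccfac38e9126bcffc58d23348a0ab06179c4b2f49c444ef2d0a6",
        input=payload,
    )
    for uri in output:  # array of image URIs per the output schema
        print(uri)
```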
Output Schema

Output

Type: array · Items Type: string · Items Format: uri
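So a successful prediction returns an array of URI strings. A small standard-library check validating a response against that schema (a sketch, not part of the model or client):

```python
from urllib.parse import urlparse

def is_valid_output(output) -> bool:
    """Return True if `output` matches the schema: an array of
    strings, each a URI with a scheme and network location."""
    if not isinstance(output, list):
        return False
    for item in output:
        if not isinstance(item, str):
            return False
        parsed = urlparse(item)
        if not (parsed.scheme and parsed.netloc):
            return False
    return True
```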

Example Execution Logs
【load_workflow path】 examples/api_workflows/sdxl_lora_work_api.json
【handle_known_unsupported_nodes】done
Checking inputs
====================================
【handle_inputs】done
Checking weights
【start check_weights if exists】 sdxl_vae.safetensors
check_if_file sdxl_vae.safetensors exists: models/vae
✅ sdxl_vae.safetensors exists in models/vae
【start check_weights if exists】 waiNSFWIllustrious_v90.safetensors
check_if_file waiNSFWIllustrious_v90.safetensors exists: models/checkpoints
✅ waiNSFWIllustrious_v90.safetensors exists in models/checkpoints
====================================
【handle_weights】done
Randomising seed to 3339211691
------ Running workflow ------
[ComfyUI] got prompt
------ Running prompt_id ------
Executing node 41, title: Load VAE, class type: VAELoader
[ComfyUI] Using pytorch attention in VAE
[ComfyUI] Using pytorch attention in VAE
[ComfyUI] VAE load device: cuda:0, offload device: cpu, dtype: torch.bfloat16
Executing node 5, title: Empty Latent Image, class type: EmptyLatentImage
Executing node 40, title: Checkpoint Loader (Simple), class type: CheckpointLoaderSimple
[ComfyUI] model weight dtype torch.float16, manual cast: None
[ComfyUI] model_type EPS
[ComfyUI] Using pytorch attention in VAE
[ComfyUI] Using pytorch attention in VAE
[ComfyUI] VAE load device: cuda:0, offload device: cpu, dtype: torch.bfloat16
[ComfyUI] CLIP/text encoder model load device: cuda:0, offload device: cpu, current: cpu, dtype: torch.float16
Executing node 7, title: CLIP Text Encode, class type: CLIPTextEncode
[ComfyUI] Requested to load SDXLClipModel
[ComfyUI] loaded completely 43939.05 1560.802734375 True
Executing node 42, title: Set CLIP Last Layer, class type: CLIPSetLastLayer
Executing node 6, title: CLIP Text Encode, class type: CLIPTextEncode
[ComfyUI] Requested to load SDXLClipModel
Executing node 3, title: KSampler, class type: KSampler
[ComfyUI] Requested to load SDXL
[ComfyUI] loaded completely 42272.12215499878 4897.0483474731445 True
[ComfyUI] 【item】: 9
[ComfyUI] 【inputs】: {'filename_prefix': 'ComfyUI', 'images': ['8', 0]}
[ComfyUI] 【class_type】: SaveImage
[ComfyUI]
[ComfyUI] 【obj_class】: <class 'nodes.SaveImage'>
[ComfyUI] 【class_inputs】: {'required': {'images': ('IMAGE', {'tooltip': 'The images to save.'}), 'filename_prefix': ('STRING', {'default': 'ComfyUI', 'tooltip': 'The prefix for the file to save. This may include formatting information such as %date:yyyy-MM-dd% or %Empty Latent Image.width% to include values from nodes.'})}, 'hidden': {'prompt': 'PROMPT', 'extra_pnginfo': 'EXTRA_PNGINFO'}}
[ComfyUI] 【valid_inputs】: {'filename_prefix', 'images'}
[ComfyUI] 【item】: 8
[ComfyUI] 【inputs】: {'samples': ['3', 0], 'vae': ['41', 0]}
[ComfyUI] 【class_type】: VAEDecode
[ComfyUI] 【obj_class】: <class 'nodes.VAEDecode'>
[ComfyUI] 【class_inputs】: {'required': {'samples': ('LATENT', {'tooltip': 'The latent to be decoded.'}), 'vae': ('VAE', {'tooltip': 'The VAE model used for decoding the latent.'})}}
[ComfyUI] 【valid_inputs】: {'samples', 'vae'}
[ComfyUI] 【item】: 3
[ComfyUI] 【inputs】: {'seed': 3339211691, 'steps': 26, 'cfg': 7, 'sampler_name': 'euler', 'scheduler': 'exponential', 'denoise': 1, 'model': ['40', 0], 'positive': ['6', 0], 'negative': ['7', 0], 'latent_image': ['5', 0]}
[ComfyUI] 【class_type】: KSampler
[ComfyUI] 【obj_class】: <class 'nodes.KSampler'>
[ComfyUI] 【class_inputs】: {'required': {'model': ('MODEL', {'tooltip': 'The model used for denoising the input latent.'}), 'seed': ('INT', {'default': 0, 'min': 0, 'max': 18446744073709551615, 'control_after_generate': True, 'tooltip': 'The random seed used for creating the noise.'}), 'steps': ('INT', {'default': 20, 'min': 1, 'max': 10000, 'tooltip': 'The number of steps used in the denoising process.'}), 'cfg': ('FLOAT', {'default': 8.0, 'min': 0.0, 'max': 100.0, 'step': 0.1, 'round': 0.01, 'tooltip': 'The Classifier-Free Guidance scale balances creativity and adherence to the prompt. Higher values result in images more closely matching the prompt however too high values will negatively impact quality.'}), 'sampler_name': (['euler', 'euler_cfg_pp', 'euler_ancestral', 'euler_ancestral_cfg_pp', 'heun', 'heunpp2', 'dpm_2', 'dpm_2_ancestral', 'lms', 'dpm_fast', 'dpm_adaptive', 'dpmpp_2s_ancestral', 'dpmpp_2s_ancestral_cfg_pp', 'dpmpp_sde', 'dpmpp_sde_gpu', 'dpmpp_2m', 'dpmpp_2m_cfg_pp', 'dpmpp_2m_sde', 'dpmpp_2m_sde_gpu', 'dpmpp_3m_sde', 'dpmpp_3m_sde_gpu', 'ddpm', 'lcm', 'ipndm', 'ipndm_v', 'deis', 'res_multistep', 'res_multistep_cfg_pp', 'res_multistep_ancestral', 'res_multistep_ancestral_cfg_pp', 'gradient_estimation', 'er_sde', 'ddim', 'uni_pc', 'uni_pc_bh2'], {'tooltip': 'The algorithm used when sampling, this can affect the quality, speed, and style of the generated output.'}), 'scheduler': (['normal', 'karras', 'exponential', 'sgm_uniform', 'simple', 'ddim_uniform', 'beta', 'linear_quadratic', 'kl_optimal'], {'tooltip': 'The scheduler controls how noise is gradually removed to form the image.'}), 'positive': ('CONDITIONING', {'tooltip': 'The conditioning describing the attributes you want to include in the image.'}), 'negative': ('CONDITIONING', {'tooltip': 'The conditioning describing the attributes you want to exclude from the image.'}), 'latent_image': ('LATENT', {'tooltip': 'The latent image to denoise.'}), 'denoise': ('FLOAT', {'default': 1.0, 'min': 0.0, 
'max': 1.0, 'step': 0.01, 'tooltip': 'The amount of denoising applied, lower values will maintain the structure of the initial image allowing for image to image sampling.'})}}
[ComfyUI] 【valid_inputs】: {'model', 'seed', 'cfg', 'scheduler', 'latent_image', 'steps', 'negative', 'denoise', 'positive', 'sampler_name'}
[ComfyUI] 【item】: 40
[ComfyUI] 【inputs】: {'ckpt_name': 'waiNSFWIllustrious_v90.safetensors'}
[ComfyUI] 【class_type】: CheckpointLoaderSimple
[ComfyUI] 【obj_class】: <class 'nodes.CheckpointLoaderSimple'>
[ComfyUI] 【class_inputs】: {'required': {'ckpt_name': (['waiNSFWIllustrious_v90.safetensors'], {'tooltip': 'The name of the checkpoint (model) to load.'})}}
[ComfyUI] 【valid_inputs】: {'ckpt_name'}
[ComfyUI] 【item】: 5
[ComfyUI] 【inputs】: {'width': 768, 'height': 1280, 'batch_size': 1}
[ComfyUI] 【class_type】: EmptyLatentImage
[ComfyUI] 【obj_class】: <class 'nodes.EmptyLatentImage'>
[ComfyUI] 【class_inputs】: {'required': {'width': ('INT', {'default': 512, 'min': 16, 'max': 16384, 'step': 8, 'tooltip': 'The width of the latent images in pixels.'}), 'height': ('INT', {'default': 512, 'min': 16, 'max': 16384, 'step': 8, 'tooltip': 'The height of the latent images in pixels.'}), 'batch_size': ('INT', {'default': 1, 'min': 1, 'max': 4096, 'tooltip': 'The number of latent images in the batch.'})}}
[ComfyUI] 【valid_inputs】: {'height', 'batch_size', 'width'}
[ComfyUI] 【item】: 7
[ComfyUI] 【inputs】: {'text': 'bad quality,worst quality,worst detail,sketch,censor,', 'clip': ['40', 1]}
[ComfyUI] 【class_type】: CLIPTextEncode
[ComfyUI] 【obj_class】: <class 'nodes.CLIPTextEncode'>
[ComfyUI] 【class_inputs】: {'required': {'text': (<IO.STRING: 'STRING'>, {'multiline': True, 'dynamicPrompts': True, 'tooltip': 'The text to be encoded.'}), 'clip': (<IO.CLIP: 'CLIP'>, {'tooltip': 'The CLIP model used for encoding the text.'})}}
[ComfyUI] 【valid_inputs】: {'clip', 'text'}
[ComfyUI] 【item】: 40
[ComfyUI] 【item】: 6
[ComfyUI] 【inputs】: {'text': 'glowing eyes, streaked hair, glowing inner hair, glowing streaks, 1girl, jinhsi \\(wuthering_waves\\), white hair,blonde_eyes, large_breasts, (upper_body,close-up:1.4),Dynamic pose,串,((aqua and yellow:0.85) but Limited palette:1.2),//,Chinese architecture,Ink,hair flowers,bamboo forest,bamboo,//(long hair),small breasts,smile,((dragon)),Yellow lightning,(makeup,white eyeliner),white eyeshadow,white eyes,(long eyelashes),half-closed eyes,Dragon skirt,blush,holding sword,chinese sword,(dynamic angle:1.2),(hanfu:1.4),chinese clothes,transparent clothes,tassel,chinese knot,bare shoulders,kanzashi,draped silk,gold trim,wind,bokeh,scattered leaves,flying splashes,waterfall,splashed water,looking at viewer,                                         \n,masterpiece,best quality,amazing quality,', 'clip': ['42', 0]}
[ComfyUI] 【class_type】: CLIPTextEncode
[ComfyUI] 【obj_class】: <class 'nodes.CLIPTextEncode'>
[ComfyUI] 【class_inputs】: {'required': {'text': (<IO.STRING: 'STRING'>, {'multiline': True, 'dynamicPrompts': True, 'tooltip': 'The text to be encoded.'}), 'clip': (<IO.CLIP: 'CLIP'>, {'tooltip': 'The CLIP model used for encoding the text.'})}}
[ComfyUI] 【valid_inputs】: {'clip', 'text'}
[ComfyUI] 【item】: 42
[ComfyUI] 【inputs】: {'stop_at_clip_layer': -2, 'clip': ['40', 1]}
[ComfyUI] 【class_type】: CLIPSetLastLayer
[ComfyUI] 【obj_class】: <class 'nodes.CLIPSetLastLayer'>
[ComfyUI] 【class_inputs】: {'required': {'clip': ('CLIP',), 'stop_at_clip_layer': ('INT', {'default': -1, 'min': -24, 'max': -1, 'step': 1})}}
[ComfyUI] 【valid_inputs】: {'clip', 'stop_at_clip_layer'}
[ComfyUI] 【item】: 40
[ComfyUI] 【item】: 41
[ComfyUI] 【inputs】: {'vae_name': 'sdxl_vae.safetensors'}
[ComfyUI] 【class_type】: VAELoader
[ComfyUI] 【obj_class】: <class 'nodes.VAELoader'>
[ComfyUI] 【class_inputs】: {'required': {'vae_name': (['sdxl_vae.safetensors'],)}}
[ComfyUI] 【valid_inputs】: {'vae_name'}
[ComfyUI] 0%|          | 0/26 [00:00<?, ?it/s]
[ComfyUI] 100%|██████████| 26/26 [00:02<00:00,  8.78it/s]
[ComfyUI] Requested to load AutoencoderKL
Executing node 8, title: VAE Decode, class type: VAEDecode
[ComfyUI] loaded completely 34094.475997924805 159.55708122253418 True
Executing node 9, title: Save Image, class type: SaveImage
[ComfyUI] Prompt executed in 5.47 seconds
outputs:  {'9': {'images': [{'filename': 'ComfyUI_00001_.png', 'subfolder': '', 'type': 'output'}]}}
====================================
ComfyUI_00001_.png
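The 【inputs】 lines in the logs above fully describe the workflow graph. Reconstructed as a ComfyUI API-format prompt (node IDs, class types, and input values are taken directly from the logs; this is a sketch of the graph, not the exact contents of examples/api_workflows/sdxl_lora_work_api.json, and the positive prompt text is elided):

```python
import json

# ComfyUI API-format workflow reconstructed from the execution logs.
# Each link is ["source_node_id", output_index].
workflow = {
    "40": {"class_type": "CheckpointLoaderSimple",
           "inputs": {"ckpt_name": "waiNSFWIllustrious_v90.safetensors"}},
    "41": {"class_type": "VAELoader",
           "inputs": {"vae_name": "sdxl_vae.safetensors"}},
    "42": {"class_type": "CLIPSetLastLayer",  # "clip skip" = 2
           "inputs": {"stop_at_clip_layer": -2, "clip": ["40", 1]}},
    "5": {"class_type": "EmptyLatentImage",
          "inputs": {"width": 768, "height": 1280, "batch_size": 1}},
    "6": {"class_type": "CLIPTextEncode",  # positive prompt
          "inputs": {"text": "<positive prompt>", "clip": ["42", 0]}},
    "7": {"class_type": "CLIPTextEncode",  # negative prompt
          "inputs": {"text": "bad quality,worst quality,worst detail,sketch,censor,",
                     "clip": ["40", 1]}},
    "3": {"class_type": "KSampler",
          "inputs": {"seed": 3339211691, "steps": 26, "cfg": 7,
                     "sampler_name": "euler", "scheduler": "exponential",
                     "denoise": 1, "model": ["40", 0], "positive": ["6", 0],
                     "negative": ["7", 0], "latent_image": ["5", 0]}},
    "8": {"class_type": "VAEDecode",
          "inputs": {"samples": ["3", 0], "vae": ["41", 0]}},
    "9": {"class_type": "SaveImage",
          "inputs": {"filename_prefix": "ComfyUI", "images": ["8", 0]}},
}

print(json.dumps(workflow["3"]["inputs"], indent=2))
```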
Version Details
Version ID
820ce2c86370ccfac38e9126bcffc58d23348a0ab06179c4b2f49c444ef2d0a6
Version Created
April 15, 2025
Run on Replicate →