janel0421/lionday0421
About
Example Output
"
A hyperrealistic wide-angle night scene in Paris featuring 'lionday0421' (Dairennys), a 6-year-old girl with dark skin and a slim build, standing confidently near the illuminated Eiffel Tower. She has almond-shaped eyes with a calm, curious expression and textured curly hair styled into two neat braids adorned with small white beads near the tips.
For this photoshoot, 'lionday0421' is wearing a charming, age-appropriate dress: a pastel pink outfit with delicate lace accents and a flowing, knee-length skirt. The dress is paired with matching ballerina flats and a subtle floral hair accessory, enhancing her youthful elegance.
The wide-angle composition captures the vibrant Parisian night. The Eiffel Tower shines brightly, casting a warm golden glow over the cobblestone streets and Seine River. The scene includes scattered tourists admiring the view, soft streetlights adding atmosphere, and a clear starry sky. The focus remains on 'lionday0421', framed beautifully in this magical and heartwarming moment, blending her grace with the enchanting backdrop of Paris.
"Output



Performance Metrics
All Input Parameters
{
"model": "dev",
"height": 1440,
"prompt": "A hyperrealistic wide-angle night scene in Paris featuring 'lionday0421' (Dairennys), a 6-year-old girl with dark skin and a slim build, standing confidently near the illuminated Eiffel Tower. She has almond-shaped eyes with a calm, curious expression and textured curly hair styled into two neat braids adorned with small white beads near the tips.\n\nFor this photoshoot, 'lionday0421' is wearing a charming, age-appropriate dress: a pastel pink outfit with delicate lace accents and a flowing, knee-length skirt. The dress is paired with matching ballerina flats and a subtle floral hair accessory, enhancing her youthful elegance.\n\nThe wide-angle composition captures the vibrant Parisian night. The Eiffel Tower shines brightly, casting a warm golden glow over the cobblestone streets and Seine River. The scene includes scattered tourists admiring the view, soft streetlights adding atmosphere, and a clear starry sky. The focus remains on 'lionday0421', framed beautifully in this magical and heartwarming moment, blending her grace with the enchanting backdrop of Paris.",
"go_fast": false,
"lora_scale": 1,
"megapixels": "1",
"num_outputs": 4,
"aspect_ratio": "9:16",
"output_format": "jpg",
"guidance_scale": 3,
"output_quality": 80,
"prompt_strength": 0.8,
"extra_lora_scale": 1,
"num_inference_steps": 28
}
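Because `aspect_ratio` is `"9:16"` rather than `custom`, the `height` value of 1440 is ignored for this prediction (see the parameter notes below). As a minimal sketch, assuming the official Replicate Python client and a `REPLICATE_API_TOKEN` in the environment, the same prediction could be requested like this (the prompt is truncated here for brevity):

import replicate

# Minimal sketch, assuming the Replicate Python client. The model name and
# version ID are the ones listed on this page; the other values come from the
# JSON above.
output = replicate.run(
    "janel0421/lionday0421:e889b6d4ee1f2e88188026d7052ac6f6c4afcb074d6e5817838cd214787d766e",
    input={
        "model": "dev",
        "prompt": "A hyperrealistic wide-angle night scene in Paris featuring 'lionday0421' ...",
        "aspect_ratio": "9:16",
        "num_outputs": 4,
        "num_inference_steps": 28,
        "guidance_scale": 3,
        "lora_scale": 1,
        "go_fast": False,
        "output_format": "jpg",
        "output_quality": 80,
    },
)

# Depending on the client version, each item is a URL string or a file-like object.
for i, item in enumerate(output):
    print(i, item)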
Input Parameters
- mask: Image mask for image inpainting mode. If provided, aspect_ratio, width, and height inputs are ignored.
- seed: Random seed. Set for reproducible generation.
- image: Input image for image-to-image or inpainting mode. If provided, aspect_ratio, width, and height inputs are ignored.
- model: Which model to run inference with. The dev model performs best with around 28 inference steps, but the schnell model only needs 4 steps.
- width: Width of the generated image. Only works if `aspect_ratio` is set to custom. Will be rounded to the nearest multiple of 16. Incompatible with fast generation.
- height: Height of the generated image. Only works if `aspect_ratio` is set to custom. Will be rounded to the nearest multiple of 16. Incompatible with fast generation.
- prompt (required): Prompt for the generated image. If you include the `trigger_word` used in the training process, you are more likely to activate the trained object, style, or concept in the resulting image.
- go_fast: Run faster predictions with a model optimized for speed (currently fp8 quantized); disable to run in the original bf16.
- extra_lora: Load LoRA weights. Supports Replicate models in the format <owner>/<username> or <owner>/<username>/<version>, HuggingFace URLs in the format huggingface.co/<owner>/<model-name>, CivitAI URLs in the format civitai.com/models/<id>[/<model-name>], or arbitrary .safetensors URLs from the Internet. For example, 'fofr/flux-pixar-cars'.
- lora_scale: Determines how strongly the main LoRA should be applied. Sane results between 0 and 1 for base inference. For go_fast we apply a 1.5x multiplier to this value; we've generally seen good performance when scaling the base value by that amount. You may still need to experiment to find the best value for your particular LoRA.
- megapixels: Approximate number of megapixels for the generated image.
- num_outputs: Number of outputs to generate.
- aspect_ratio: Aspect ratio for the generated image. If custom is selected, uses the height and width inputs and will run in bf16 mode (see the sketch after this list).
- output_format: Format of the output images.
- guidance_scale: Guidance scale for the diffusion process. Lower values can give more realistic images. Good values to try are 2, 2.5, 3, and 3.5.
- output_quality: Quality when saving the output images, from 0 to 100. 100 is best quality, 0 is lowest quality. Not relevant for .png outputs.
- prompt_strength: Prompt strength when using img2img. 1.0 corresponds to full destruction of information in the input image.
- extra_lora_scale: Determines how strongly the extra LoRA should be applied. Sane results between 0 and 1 for base inference. For go_fast we apply a 1.5x multiplier to this value; we've generally seen good performance when scaling the base value by that amount. You may still need to experiment to find the best value for your particular LoRA.
- replicate_weights: Load LoRA weights. Supports Replicate models in the format <owner>/<username> or <owner>/<username>/<version>, HuggingFace URLs in the format huggingface.co/<owner>/<model-name>, CivitAI URLs in the format civitai.com/models/<id>[/<model-name>], or arbitrary .safetensors URLs from the Internet. For example, 'fofr/flux-pixar-cars'.
- num_inference_steps: Number of denoising steps. More steps can give more detailed images but take longer.
- disable_safety_checker: Disable the safety checker for generated images.
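As referenced in the aspect_ratio note above, here is a hedged sketch of a call that combines a custom size with a stacked extra LoRA. It again assumes the Replicate Python client; the `extra_lora` value is the example given in the parameter description, and the remaining values are illustrative placeholders rather than settings from the example prediction:

import replicate

# Hypothetical call illustrating `aspect_ratio: custom` plus an extra LoRA.
# The width, height, and scales here are placeholders.
output = replicate.run(
    "janel0421/lionday0421:e889b6d4ee1f2e88188026d7052ac6f6c4afcb074d6e5817838cd214787d766e",
    input={
        "prompt": "Portrait of 'lionday0421' on a Paris street at night",
        "aspect_ratio": "custom",   # required for width/height to take effect; runs in bf16
        "width": 816,               # both dimensions are rounded to a multiple of 16
        "height": 1440,
        "extra_lora": "fofr/flux-pixar-cars",   # format example from the docs above
        "extra_lora_scale": 0.8,
        "num_inference_steps": 28,
    },
)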
Output Schema
Example Execution Logs
2025-01-15 00:41:31.323 | DEBUG   | fp8.lora_loading:apply_lora_to_model:574 - Extracting keys
2025-01-15 00:41:31.323 | DEBUG   | fp8.lora_loading:apply_lora_to_model:581 - Keys extracted
Applying LoRA: 100%|██████████| 304/304 [00:00<00:00, 2728.58it/s]
2025-01-15 00:41:31.435 | SUCCESS | fp8.lora_loading:unload_loras:564 - LoRAs unloaded in 0.11s
2025-01-15 00:41:31.436 | INFO    | fp8.lora_loading:convert_lora_weights:498 - Loading LoRA weights for /src/weights-cache/6c9893efceca0c8c
2025-01-15 00:41:31.552 | INFO    | fp8.lora_loading:convert_lora_weights:519 - LoRA weights loaded
2025-01-15 00:41:31.552 | DEBUG   | fp8.lora_loading:apply_lora_to_model:574 - Extracting keys
2025-01-15 00:41:31.552 | DEBUG   | fp8.lora_loading:apply_lora_to_model:581 - Keys extracted
Applying LoRA: 100%|██████████| 304/304 [00:00<00:00, 2733.70it/s]
2025-01-15 00:41:31.664 | SUCCESS | fp8.lora_loading:load_lora:539 - LoRA applied in 0.23s
Using seed: 7005
28it [00:05, 4.91it/s]
28it [00:05, 4.82it/s]
28it [00:05, 4.83it/s]
28it [00:05, 4.82it/s]
Total safe images: 4 out of 4
Version Details
- Version ID: e889b6d4ee1f2e88188026d7052ac6f6c4afcb074d6e5817838cd214787d766e
- Version Created: January 14, 2025
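To pin or inspect this exact version programmatically, the same (assumed) Replicate Python client can look it up by ID:

import replicate

# Sketch: fetch the model and the specific version listed above.
model = replicate.models.get("janel0421/lionday0421")
version = model.versions.get(
    "e889b6d4ee1f2e88188026d7052ac6f6c4afcb074d6e5817838cd214787d766e"
)
print(version.id, version.created_at)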