klassenmedia/henna_moosmed
About
Example Output
Prompt:
" a handcrafted artisanal soap HENNA_MOOSMED standing in the sand in the desert in the sand, wide angle, blurry bokeh, in the bright environment, next to a henna bush, during a sunset in the background, shot with a Sony Alpha 7R IV, 50mm f/1.4 lens, warm and soft color palette, a beautiful woman JUGO in the background"
Output
Performance Metrics
- Prediction Time: 13.24s
- Total Time: 13.26s
All Input Parameters
{
"model": "dev",
"prompt": " a handcrafted artisanal soap HENNA_MOOSMED standing in the sand in the desert in the sand, wide angle, blurry bokeh, in the bright environment, next to a henna bush, during a sunset in the background, shot with a Sony Alpha 7R IV, 50mm f/1.4 lens, warm and soft color palette, a beautiful woman JUGO in the background",
"go_fast": false,
"extra_lora": "https://huggingface.co/drface/JUGO",
"lora_scale": 1,
"megapixels": "1",
"num_outputs": 1,
"aspect_ratio": "4:5",
"output_format": "jpg",
"guidance_scale": 4.5,
"output_quality": 100,
"prompt_strength": 0.89,
"extra_lora_scale": 1.07,
"num_inference_steps": 28
}
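The example above can be reproduced programmatically. The following is a minimal sketch using the Replicate Python client (pip install replicate), assuming REPLICATE_API_TOKEN is set in the environment; the model slug and version ID are taken from this page, and only the key parameters from the JSON above are repeated.

import replicate

output = replicate.run(
    "klassenmedia/henna_moosmed:da589e5409edea031890c2ce23800a7bbb689267e27b307068aa0aa4a4c3e540",
    input={
        "model": "dev",
        "prompt": "a handcrafted artisanal soap HENNA_MOOSMED ...",  # full prompt as quoted above
        "go_fast": False,
        "extra_lora": "https://huggingface.co/drface/JUGO",
        "lora_scale": 1,
        "extra_lora_scale": 1.07,
        "aspect_ratio": "4:5",
        "output_format": "jpg",
        "guidance_scale": 4.5,
        "prompt_strength": 0.89,
        "num_inference_steps": 28,
        # remaining parameters as in the JSON above
    },
)
print(output)  # the generated image(s) returned by the prediction

Since go_fast is disabled here, the LoRA scales are used as given; with go_fast enabled, a 1.5x multiplier is applied to them, as described in the parameter list below.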
Input Parameters
- mask: Image mask for image inpainting mode. If provided, aspect_ratio, width, and height inputs are ignored.
- seed: Random seed. Set for reproducible generation.
- image: Input image for image-to-image or inpainting mode. If provided, aspect_ratio, width, and height inputs are ignored (see the sketch after this list).
- model: Which model to run inference with. The dev model performs best with around 28 inference steps, but the schnell model only needs 4 steps.
- width: Width of the generated image. Only works if `aspect_ratio` is set to custom. Will be rounded to the nearest multiple of 16. Incompatible with fast generation.
- height: Height of the generated image. Only works if `aspect_ratio` is set to custom. Will be rounded to the nearest multiple of 16. Incompatible with fast generation.
- prompt (required): Prompt for the generated image. If you include the `trigger_word` used in the training process, you are more likely to activate the trained object, style, or concept in the resulting image.
- go_fast: Run faster predictions with a model optimized for speed (currently fp8 quantized); disable to run in the original bf16.
- extra_lora: Load LoRA weights. Supports Replicate models in the format <owner>/<username> or <owner>/<username>/<version>, HuggingFace URLs in the format huggingface.co/<owner>/<model-name>, CivitAI URLs in the format civitai.com/models/<id>[/<model-name>], or arbitrary .safetensors URLs from the Internet. For example, 'fofr/flux-pixar-cars'.
- lora_scale: Determines how strongly the main LoRA should be applied. Sane results between 0 and 1 for base inference. For go_fast we apply a 1.5x multiplier to this value; we've generally seen good performance when scaling the base value by that amount. You may still need to experiment to find the best value for your particular LoRA.
- megapixels: Approximate number of megapixels for the generated image.
- num_outputs: Number of outputs to generate.
- aspect_ratio: Aspect ratio for the generated image. If custom is selected, uses the height and width below and will run in bf16 mode.
- output_format: Format of the output images.
- guidance_scale: Guidance scale for the diffusion process. Lower values can give more realistic images. Good values to try are 2, 2.5, 3, and 3.5.
- output_quality: Quality when saving the output images, from 0 to 100. 100 is best quality, 0 is lowest quality. Not relevant for .png outputs.
- prompt_strength: Prompt strength when using img2img. 1.0 corresponds to full destruction of information in the input image.
- extra_lora_scale: Determines how strongly the extra LoRA should be applied. Sane results between 0 and 1 for base inference. For go_fast we apply a 1.5x multiplier to this value; we've generally seen good performance when scaling the base value by that amount. You may still need to experiment to find the best value for your particular LoRA.
- replicate_weights: Load LoRA weights. Supports Replicate models in the format <owner>/<username> or <owner>/<username>/<version>, HuggingFace URLs in the format huggingface.co/<owner>/<model-name>, CivitAI URLs in the format civitai.com/models/<id>[/<model-name>], or arbitrary .safetensors URLs from the Internet. For example, 'fofr/flux-pixar-cars'.
- num_inference_steps: Number of denoising steps. More steps can give more detailed images but take longer.
- disable_safety_checker: Disable the safety checker for generated images.
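The image, mask, and prompt_strength parameters switch the model into img2img or inpainting mode. Below is a hedged sketch of such a call with the same Python client; the file names and the edited prompt are placeholders, and the mask convention (which region gets regenerated) should be checked against the model's own behavior.

import replicate

# Placeholder local files; the client also accepts public URLs for image and mask.
with open("soap_photo.jpg", "rb") as image, open("soap_mask.png", "rb") as mask:
    output = replicate.run(
        "klassenmedia/henna_moosmed:da589e5409edea031890c2ce23800a7bbb689267e27b307068aa0aa4a4c3e540",
        input={
            "prompt": "a handcrafted artisanal soap HENNA_MOOSMED on weathered driftwood",  # placeholder prompt keeping the HENNA_MOOSMED token from the example above
            "image": image,             # starting image; aspect_ratio/width/height are ignored
            "mask": mask,               # limits regeneration to the masked region
            "prompt_strength": 0.8,     # 1.0 would fully discard the input image's information
            "num_inference_steps": 28,  # the dev model performs best around 28 steps
        },
    )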
Output Schema
Output
Example Execution Logs
2025-01-22 19:45:36.938 | DEBUG | fp8.lora_loading:apply_lora_to_model:574 - Extracting keys
2025-01-22 19:45:36.938 | DEBUG | fp8.lora_loading:apply_lora_to_model:581 - Keys extracted
Applying LoRA: 0%| | 0/304 [00:00<?, ?it/s]
Applying LoRA: 91%|█████████ | 277/304 [00:00<00:00, 2749.80it/s]
Applying LoRA: 100%|██████████| 304/304 [00:00<00:00, 2691.69it/s]
2025-01-22 19:45:37.052 | DEBUG | fp8.lora_loading:apply_lora_to_model:574 - Extracting keys
2025-01-22 19:45:37.052 | DEBUG | fp8.lora_loading:apply_lora_to_model:581 - Keys extracted
Applying LoRA: 0%| | 0/304 [00:00<?, ?it/s]
Applying LoRA: 61%|██████ | 186/304 [00:00<00:00, 1836.34it/s]
Applying LoRA: 100%|██████████| 304/304 [00:00<00:00, 1869.57it/s]
2025-01-22 19:45:37.215 | SUCCESS | fp8.lora_loading:unload_loras:564 - LoRAs unloaded in 0.28s free=28663100534784
Downloading weights
2025-01-22T19:45:37Z | INFO | [ Initiating ] chunk_size=150M dest=/tmp/tmpty2jsw5d/weights url=https://replicate.delivery/xezq/i2ARRIKTnupNAdm4fCfRI5nYk7oamBWGRDh7idgSN166U0CUA/trained_model.tar
2025-01-22T19:45:40Z | INFO | [ Complete ] dest=/tmp/tmpty2jsw5d/weights size="172 MB" total_elapsed=2.834s url=https://replicate.delivery/xezq/i2ARRIKTnupNAdm4fCfRI5nYk7oamBWGRDh7idgSN166U0CUA/trained_model.tar
Downloaded weights in 2.86s free=28662927929344
Downloading weights
2025-01-22T19:45:40Z | INFO | [ Initiating ] chunk_size=150M dest=/src/weights-cache/231e317ec4ace7ba url=https://huggingface.co/drface/JUGO/resolve/main/lora.safetensors
2025-01-22T19:45:40Z | INFO | [ Redirect ] redirect_url=https://cdn-lfs-us-1.hf.co/repos/1a/27/1a27617a7b3451268c65d844a4268b0f0de5463f153b731247d4b10d02c117a9/d524f2a98f3bf6eba462ef73d6f7b9579c1ca26d4022d8195349916682f7395c?response-content-disposition=inline%3B+filename*%3DUTF-8%27%27lora.safetensors%3B+filename%3D%22lora.safetensors%22%3B&Expires=1737578740&Policy=eyJTdGF0ZW1lbnQiOlt7IkNvbmRpdGlvbiI6eyJEYXRlTGVzc1RoYW4iOnsiQVdTOkVwb2NoVGltZSI6MTczNzU3ODc0MH19LCJSZXNvdXJjZSI6Imh0dHBzOi8vY2RuLWxmcy11cy0xLmhmLmNvL3JlcG9zLzFhLzI3LzFhMjc2MTdhN2IzNDUxMjY4YzY1ZDg0NGE0MjY4YjBmMGRlNTQ2M2YxNTNiNzMxMjQ3ZDRiMTBkMDJjMTE3YTkvZDUyNGYyYTk4ZjNiZjZlYmE0NjJlZjczZDZmN2I5NTc5YzFjYTI2ZDQwMjJkODE5NTM0OTkxNjY4MmY3Mzk1Yz9yZXNwb25zZS1jb250ZW50LWRpc3Bvc2l0aW9uPSoifV19&Signature=hO0lh6WMi0g1cit9JIxxubMjC6rat891ahs848g78Rg7Uz9uSaWAkz3OaOJ6NSfkU5Y8CWENT7BTIWNT7TAKCFTLMCexlgnI2Q48sLbD%7EienYUurHoAmUAAONzhBUBZH20-AKIhNVHiFuWQXsyganoTvnzo1Nl-EPtuqF1gUcM5rFrpPcco25XKyXjal0jyIY8LEYp-F5O8sMI91gaD3JRGMgPAOVUhIepMBSs2cTHxPw8s0O7Sik8OulAkTP9RV6bUjRBe7TSfydEwNQ-mwidCIaNhdfUsSttytiy10uXeaOyE6xubUCz1sO7Zrj7mfN1NXxqgSGJ%7ExOwOcHxoZBQ__&Key-Pair-Id=K24J24Z295AEI9 url=https://huggingface.co/drface/JUGO/resolve/main/lora.safetensors
2025-01-22T19:45:44Z | INFO | [ Complete ] dest=/src/weights-cache/231e317ec4ace7ba size="172 MB" total_elapsed=3.931s url=https://huggingface.co/drface/JUGO/resolve/main/lora.safetensors
Downloaded weights in 3.95s
2025-01-22 19:45:44.128 | INFO | fp8.lora_loading:convert_lora_weights:498 - Loading LoRA weights for /src/weights-cache/8ca0f0fe91b38c96
2025-01-22 19:45:44.199 | INFO | fp8.lora_loading:convert_lora_weights:519 - LoRA weights loaded
2025-01-22 19:45:44.199 | DEBUG | fp8.lora_loading:apply_lora_to_model:574 - Extracting keys
2025-01-22 19:45:44.199 | DEBUG | fp8.lora_loading:apply_lora_to_model:581 - Keys extracted
Applying LoRA: 0%| | 0/304 [00:00<?, ?it/s]
Applying LoRA: 91%|█████████ | 277/304 [00:00<00:00, 2763.76it/s]
Applying LoRA: 100%|██████████| 304/304 [00:00<00:00, 2704.42it/s]
2025-01-22 19:45:44.312 | SUCCESS | fp8.lora_loading:load_lora:539 - LoRA applied in 0.18s
2025-01-22 19:45:44.312 | INFO | fp8.lora_loading:convert_lora_weights:498 - Loading LoRA weights for /src/weights-cache/231e317ec4ace7ba
2025-01-22 19:45:44.430 | INFO | fp8.lora_loading:convert_lora_weights:519 - LoRA weights loaded
2025-01-22 19:45:44.430 | DEBUG | fp8.lora_loading:apply_lora_to_model:574 - Extracting keys
2025-01-22 19:45:44.431 | DEBUG | fp8.lora_loading:apply_lora_to_model:581 - Keys extracted
Applying LoRA: 0%| | 0/304 [00:00<?, ?it/s]
Applying LoRA: 91%|█████████ | 277/304 [00:00<00:00, 2764.45it/s]
Applying LoRA: 100%|██████████| 304/304 [00:00<00:00, 2704.68it/s]
2025-01-22 19:45:44.543 | SUCCESS | fp8.lora_loading:load_lora:539 - LoRA applied in 0.23s
Using seed: 26182
0it [00:00, ?it/s] 1it [00:00, 9.11it/s] 2it [00:00, 6.33it/s] 3it [00:00, 5.77it/s] 4it [00:00, 5.56it/s] 5it [00:00, 5.43it/s] 6it [00:01, 5.28it/s] 7it [00:01, 5.23it/s] 8it [00:01, 5.23it/s] 9it [00:01, 5.23it/s] 10it [00:01, 5.23it/s] 11it [00:02, 5.21it/s] 12it [00:02, 5.20it/s] 13it [00:02, 5.19it/s] 14it [00:02, 5.18it/s] 15it [00:02, 5.16it/s] 16it [00:03, 5.15it/s] 17it [00:03, 5.14it/s] 18it [00:03, 5.14it/s] 19it [00:03, 5.15it/s] 20it [00:03, 5.17it/s] 21it [00:03, 5.17it/s] 22it [00:04, 5.17it/s] 23it [00:04, 5.16it/s] 24it [00:04, 5.15it/s] 25it [00:04, 5.17it/s] 26it [00:04, 5.15it/s] 27it [00:05, 5.15it/s] 28it [00:05, 5.14it/s] 28it [00:05, 5.25it/s]
Total safe images: 1 out of 1
Version Details
- Version ID: da589e5409edea031890c2ce23800a7bbb689267e27b307068aa0aa4a4c3e540
- Version Created: January 7, 2025
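To pin this exact version from code, the version ID above can be resolved through the Replicate Python client. A minimal sketch, assuming the current client API (replicate.models.get and Version.openapi_schema):

import replicate

model = replicate.models.get("klassenmedia/henna_moosmed")
version = model.versions.get("da589e5409edea031890c2ce23800a7bbb689267e27b307068aa0aa4a4c3e540")
print(version.id, version.created_at)
# The input parameters documented above are also exposed programmatically:
print(list(version.openapi_schema["components"]["schemas"]["Input"]["properties"]))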