digital-prairie-labs/catholic-prayers-v2.1
About
Example Output
Prompt:
"Dios te salve, María, llena eres de gracia; el Señor es contigo." ("Hail Mary, full of grace; the Lord is with thee.")
Output
Performance Metrics
- Prediction Time: 3.91s
- Total Time: 4.00s
All Input Parameters
{
"model": "dev",
"prompt": "Dios te salve, María, llena eres de gracia; el Señor es contigo.",
"go_fast": true,
"lora_scale": 1,
"megapixels": "1",
"num_outputs": 1,
"aspect_ratio": "9:16",
"output_format": "png",
"guidance_scale": 3,
"output_quality": 80,
"prompt_strength": 0.8,
"extra_lora_scale": 1,
"num_inference_steps": 28
}
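The payload above can be reproduced with Replicate's Python client. A minimal sketch: the `replicate.run` call is commented out because it needs a valid `REPLICATE_API_TOKEN` and network access, and the model reference is assumed from this page's name.

```python
import json

# The exact input payload from the example above.
inputs = {
    "model": "dev",
    "prompt": "Dios te salve, María, llena eres de gracia; el Señor es contigo.",
    "go_fast": True,
    "lora_scale": 1,
    "megapixels": "1",
    "num_outputs": 1,
    "aspect_ratio": "9:16",
    "output_format": "png",
    "guidance_scale": 3,
    "output_quality": 80,
    "prompt_strength": 0.8,
    "extra_lora_scale": 1,
    "num_inference_steps": 28,
}

# To actually generate an image (requires `pip install replicate` and
# REPLICATE_API_TOKEN in the environment):
# import replicate
# output = replicate.run("digital-prairie-labs/catholic-prayers-v2.1", input=inputs)

print(json.dumps(inputs, ensure_ascii=False, indent=2))
```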
Input Parameters
- mask
- Image mask for image inpainting mode. If provided, the `aspect_ratio`, `width`, and `height` inputs are ignored.
- seed
- Random seed. Set for reproducible generation
- image
- Input image for image-to-image or inpainting mode. If provided, the `aspect_ratio`, `width`, and `height` inputs are ignored.
- model
- Which model to run inference with. The dev model performs best with around 28 inference steps, but the schnell model only needs 4 steps.
- width
- Width of generated image. Only used when `aspect_ratio` is set to custom. Will be rounded to the nearest multiple of 16. Incompatible with fast generation (`go_fast`).
- height
- Height of generated image. Only used when `aspect_ratio` is set to custom. Will be rounded to the nearest multiple of 16. Incompatible with fast generation (`go_fast`).
- prompt (required)
- Prompt for generated image. If you include the `trigger_word` used in the training process you are more likely to activate the trained object, style, or concept in the resulting image.
- go_fast
- Run faster predictions with model optimized for speed (currently fp8 quantized); disable to run in original bf16
- extra_lora
- Load LoRA weights. Supports Replicate models in the format <owner>/<model-name> or <owner>/<model-name>/<version>, Hugging Face URLs in the format huggingface.co/<owner>/<model-name>, CivitAI URLs in the format civitai.com/models/<id>[/<model-name>], or arbitrary .safetensors URLs from the Internet. For example, 'fofr/flux-pixar-cars'
- lora_scale
- Determines how strongly the main LoRA should be applied. Values between 0 and 1 give sane results for base inference. With go_fast, a 1.5x multiplier is applied to this value, which generally performs well; you may still need to experiment to find the best value for your particular LoRA.
- megapixels
- Approximate number of megapixels for generated image
- num_outputs
- Number of outputs to generate
- aspect_ratio
- Aspect ratio for the generated image. If custom is selected, the height and width inputs below are used, and generation runs in bf16 mode.
- output_format
- Format of the output images
- guidance_scale
- Guidance scale for the diffusion process. Lower values can give more realistic images. Good values to try are 2, 2.5, 3, and 3.5.
- output_quality
- Quality when saving the output images, from 0 to 100. 100 is best quality, 0 is lowest quality. Not relevant for .png outputs
- prompt_strength
- Prompt strength when using img2img. 1.0 corresponds to full destruction of information in the input image.
- extra_lora_scale
- Determines how strongly the extra LoRA should be applied. Values between 0 and 1 give sane results for base inference. With go_fast, a 1.5x multiplier is applied to this value, which generally performs well; you may still need to experiment to find the best value for your particular LoRA.
- replicate_weights
- Load LoRA weights. Supports Replicate models in the format <owner>/<model-name> or <owner>/<model-name>/<version>, Hugging Face URLs in the format huggingface.co/<owner>/<model-name>, CivitAI URLs in the format civitai.com/models/<id>[/<model-name>], or arbitrary .safetensors URLs from the Internet. For example, 'fofr/flux-pixar-cars'
- num_inference_steps
- Number of denoising steps. More steps can give more detailed images, but take longer.
- disable_safety_checker
- Disable safety checker for generated images.
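Two of the behaviors described above can be sketched in Python. These are illustrative only: `round_to_multiple_of_16` and `effective_lora_scale` are hypothetical helpers mimicking the documented behavior, not part of the API.

```python
def round_to_multiple_of_16(value: int) -> int:
    """Mimic the documented rounding of custom width/height
    to the nearest multiple of 16."""
    return round(value / 16) * 16


def effective_lora_scale(lora_scale: float, go_fast: bool) -> float:
    """Mimic the documented 1.5x multiplier applied to lora_scale
    (and extra_lora_scale) when go_fast is enabled."""
    return lora_scale * 1.5 if go_fast else lora_scale


print(round_to_multiple_of_16(1023))            # -> 1024
print(effective_lora_scale(1.0, go_fast=True))  # -> 1.5
```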
Output Schema
Example Execution Logs
2025-08-08 23:40:51.992 | INFO | fp8.lora_loading:restore_base_weights:600 - Unloaded 304 layers
2025-08-08 23:40:51.995 | SUCCESS | fp8.lora_loading:unload_loras:571 - LoRAs unloaded in 0.026s free=24836687204352
Downloading weights
2025-08-08T23:40:52Z | INFO | [ Initiating ] chunk_size=150M dest=/tmp/tmp5j8tozfw/weights url=https://replicate.delivery/xezq/O3xVDeFkhfvyYEcRUU0SKSHLIegK7N1c7GM5M3J7FkmFEHSqA/flux-lora.tar
2025-08-08T23:40:52Z | INFO | [ Cache Service ] enabled=true scheme=http target=hermes.services.svc.cluster.local
2025-08-08T23:40:52Z | INFO | [ Cache URL Rewrite ] enabled=true target_url=http://hermes.services.svc.cluster.local/replicate.delivery/xezq/O3xVDeFkhfvyYEcRUU0SKSHLIegK7N1c7GM5M3J7FkmFEHSqA/flux-lora.tar url=https://replicate.delivery/xezq/O3xVDeFkhfvyYEcRUU0SKSHLIegK7N1c7GM5M3J7FkmFEHSqA/flux-lora.tar
2025-08-08T23:40:52Z | INFO | [ Redirect ] redirect_url=http://r8-east4-loras-ric1.cwlota.com/b5f1af286c854706123f3523a854c18db46d3bb6a8e4d3ecfb2d18735ad0b979?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Checksum-Mode=ENABLED&X-Amz-Credential=CWNZUVKLDHXVHEZN%2F20250808%2FUS-EAST-04A%2Fs3%2Faws4_request&X-Amz-Date=20250808T234052Z&X-Amz-Expires=600&X-Amz-SignedHeaders=host&x-id=GetObject&X-Amz-Signature=b5e1a6caa5eac90880be64d06ad458e4644cb5bc6863cc9de30f9a8722212fc1 url=http://hermes.services.svc.cluster.local/replicate.delivery/xezq/O3xVDeFkhfvyYEcRUU0SKSHLIegK7N1c7GM5M3J7FkmFEHSqA/flux-lora.tar
2025-08-08T23:40:52Z | INFO | [ Complete ] dest=/tmp/tmp5j8tozfw/weights size="172 MB" total_elapsed=0.244s url=https://replicate.delivery/xezq/O3xVDeFkhfvyYEcRUU0SKSHLIegK7N1c7GM5M3J7FkmFEHSqA/flux-lora.tar
Downloaded weights in 0.30s
2025-08-08 23:40:52.299 | INFO | fp8.lora_loading:convert_lora_weights:502 - Loading LoRA weights for /src/weights-cache/3bc843d1d927a93d
2025-08-08 23:40:52.373 | INFO | fp8.lora_loading:convert_lora_weights:523 - LoRA weights loaded
2025-08-08 23:40:52.373 | DEBUG | fp8.lora_loading:apply_lora_to_model_and_optionally_store_clones:610 - Extracting keys
2025-08-08 23:40:52.373 | DEBUG | fp8.lora_loading:apply_lora_to_model_and_optionally_store_clones:617 - Keys extracted
Applying LoRA: 0%|          | 0/304 [00:00<?, ?it/s]
Applying LoRA: 43%|█████     | 131/304 [00:00<00:00, 1301.98it/s]
Applying LoRA: 86%|█████████ | 262/304 [00:00<00:00, 1040.80it/s]
Applying LoRA: 100%|██████████| 304/304 [00:00<00:00, 1059.70it/s]
2025-08-08 23:40:52.660 | INFO | fp8.lora_loading:apply_lora_to_model_and_optionally_store_clones:669 - Loading LoRA in fp8
2025-08-08 23:40:52.661 | SUCCESS | fp8.lora_loading:load_lora:547 - LoRA applied in 0.36s
running quantized prediction
Using seed: 3911864408
  0%|          | 0/28 [00:00<?, ?it/s]
  7%|█         | 2/28 [00:00<00:01, 19.92it/s]
 14%|██        | 4/28 [00:00<00:01, 13.17it/s]
 21%|███       | 6/28 [00:00<00:01, 11.90it/s]
 29%|███       | 8/28 [00:00<00:01, 11.36it/s]
 36%|████      | 10/28 [00:00<00:01, 10.97it/s]
 43%|█████     | 12/28 [00:01<00:01, 10.66it/s]
 50%|█████     | 14/28 [00:01<00:01, 10.61it/s]
 57%|██████    | 16/28 [00:01<00:01, 10.59it/s]
 64%|███████   | 18/28 [00:01<00:00, 10.62it/s]
 71%|████████  | 20/28 [00:01<00:00, 10.52it/s]
 79%|████████  | 22/28 [00:02<00:00, 10.40it/s]
 86%|█████████ | 24/28 [00:02<00:00, 10.38it/s]
 93%|██████████| 26/28 [00:02<00:00, 10.42it/s]
100%|██████████| 28/28 [00:02<00:00, 10.47it/s]
100%|██████████| 28/28 [00:02<00:00, 10.82it/s]
Total safe images: 1 out of 1
Version Details
- Version ID
- a433703fe1b6e92369fa6917878472ae703a3663f726c6da89a063ca78208f43
- Version Created
- August 8, 2025