doobls-ai/golliersrasse-selfcaptioned
About
Example Output
"
golliersrasse Imagine a modern office room designed for productivity and comfort. A central desk faces the entrance, equipped with a computer setup: a monitor, keyboard, and mouse. To the right of the monitor, thereβs a pen holder filled with writing tools and a notepad beside it. On the left, a framed photograph adds a personal touch, while a desk lamp in the far-left corner provides focused lighting.
Within armβs reach on the right side stands a filing cabinet, while a nearby printer station includes a paper supply shelf. A cozy seating area with two chairs and a small coffee table is positioned in the opposite corner from the entrance. Against the left wall, a bookshelf displays books and decorative items. Behind the desk, a mounted whiteboard offers space for quick notes and brainstorming sessions.
"Output


Performance Metrics
All Input Parameters
{
  "model": "dev",
  "prompt": "golliersrasse Imagine a modern office room designed for productivity and comfort. A central desk faces the entrance, equipped with a computer setup: a monitor, keyboard, and mouse. To the right of the monitor, there's a pen holder filled with writing tools and a notepad beside it. On the left, a framed photograph adds a personal touch, while a desk lamp in the far-left corner provides focused lighting.\n\nWithin arm's reach on the right side stands a filing cabinet, while a nearby printer station includes a paper supply shelf. A cozy seating area with two chairs and a small coffee table is positioned in the opposite corner from the entrance. Against the left wall, a bookshelf displays books and decorative items. Behind the desk, a mounted whiteboard offers space for quick notes and brainstorming sessions.",
  "go_fast": false,
  "lora_scale": 1,
  "megapixels": "1",
  "num_outputs": 3,
  "aspect_ratio": "1:1",
  "output_format": "webp",
  "guidance_scale": 3,
  "output_quality": 80,
  "prompt_strength": 0.8,
  "extra_lora_scale": 1,
  "num_inference_steps": 28
}
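The parameters above map directly onto a call through the official Replicate Python client (`pip install replicate`). A minimal sketch, assuming a `REPLICATE_API_TOKEN` in the environment and using the version ID from the "Version Details" section on this page; the prompt is truncated here for brevity:

```python
# Sketch: invoking this model via the Replicate Python client.
# Requires `pip install replicate` and REPLICATE_API_TOKEN to actually run.
import os

# Input payload mirroring the "All Input Parameters" example above.
payload = {
    "model": "dev",
    "prompt": "golliersrasse Imagine a modern office room ...",  # full caption from the example
    "go_fast": False,
    "lora_scale": 1,
    "megapixels": "1",
    "num_outputs": 3,
    "aspect_ratio": "1:1",
    "output_format": "webp",
    "guidance_scale": 3,
    "output_quality": 80,
    "prompt_strength": 0.8,
    "extra_lora_scale": 1,
    "num_inference_steps": 28,
}

if os.environ.get("REPLICATE_API_TOKEN"):
    import replicate

    # "owner/name:version" pins the exact version listed under Version Details.
    output = replicate.run(
        "doobls-ai/golliersrasse-selfcaptioned:"
        "7d2bc783d7bf478a05dcae34e334753d840ce79495df1a8560e0dc1851223a0d",
        input=payload,
    )
    for item in output:
        print(item)
```

With `num_outputs` set to 3, `replicate.run` returns one output item per generated image.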
Input Parameters
- mask
- Image mask for image inpainting mode. If provided, aspect_ratio, width, and height inputs are ignored.
- seed
- Random seed. Set for reproducible generation.
- image
- Input image for image to image or inpainting mode. If provided, aspect_ratio, width, and height inputs are ignored.
- model
- Which model to run inference with. The dev model performs best with around 28 inference steps but the schnell model only needs 4 steps.
- width
- Width of generated image. Only works if `aspect_ratio` is set to custom. Will be rounded to nearest multiple of 16. Incompatible with fast generation
- height
- Height of generated image. Only works if `aspect_ratio` is set to custom. Will be rounded to nearest multiple of 16. Incompatible with fast generation
- prompt (required)
- Prompt for generated image. If you include the `trigger_word` used in the training process you are more likely to activate the trained object, style, or concept in the resulting image.
- go_fast
- Run faster predictions with model optimized for speed (currently fp8 quantized); disable to run in original bf16
- extra_lora
- Load LoRA weights. Supports Replicate models in the format <owner>/<username> or <owner>/<username>/<version>, HuggingFace URLs in the format huggingface.co/<owner>/<model-name>, CivitAI URLs in the format civitai.com/models/<id>[/<model-name>], or arbitrary .safetensors URLs from the Internet. For example, 'fofr/flux-pixar-cars'
- lora_scale
- Determines how strongly the main LoRA should be applied. Sane results between 0 and 1 for base inference. For go_fast we apply a 1.5x multiplier to this value; we've generally seen good performance when scaling the base value by that amount. You may still need to experiment to find the best value for your particular lora.
- megapixels
- Approximate number of megapixels for generated image
- num_outputs
- Number of outputs to generate
- aspect_ratio
- Aspect ratio for the generated image. If custom is selected, uses height and width below & will run in bf16 mode
- output_format
- Format of the output images
- guidance_scale
- Guidance scale for the diffusion process. Lower values can give more realistic images. Good values to try are 2, 2.5, 3 and 3.5
- output_quality
- Quality when saving the output images, from 0 to 100. 100 is best quality, 0 is lowest quality. Not relevant for .png outputs
- prompt_strength
- Prompt strength when using img2img. 1.0 corresponds to full destruction of information in image
- extra_lora_scale
- Determines how strongly the extra LoRA should be applied. Sane results between 0 and 1 for base inference. For go_fast we apply a 1.5x multiplier to this value; we've generally seen good performance when scaling the base value by that amount. You may still need to experiment to find the best value for your particular lora.
- replicate_weights
- Load LoRA weights. Supports Replicate models in the format <owner>/<username> or <owner>/<username>/<version>, HuggingFace URLs in the format huggingface.co/<owner>/<model-name>, CivitAI URLs in the format civitai.com/models/<id>[/<model-name>], or arbitrary .safetensors URLs from the Internet. For example, 'fofr/flux-pixar-cars'
- num_inference_steps
- Number of denoising steps. More steps can give more detailed images, but take longer.
- disable_safety_checker
- Disable safety checker for generated images.
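Two of the behaviours described above are easy to pin down numerically: custom `width`/`height` values are rounded to the nearest multiple of 16, and under `go_fast` the effective LoRA scale is the base value times 1.5. A hypothetical sketch of that preprocessing; the helper names are mine, not the model's actual code:

```python
def round_to_multiple_of_16(value: int) -> int:
    """Round a requested dimension to the nearest multiple of 16,
    as the width/height parameter docs describe."""
    return round(value / 16) * 16


def effective_lora_scale(lora_scale: float, go_fast: bool) -> float:
    """Apply the 1.5x multiplier that lora_scale/extra_lora_scale
    mention for go_fast mode."""
    return lora_scale * 1.5 if go_fast else lora_scale


print(round_to_multiple_of_16(1023))      # 1024
print(effective_lora_scale(1.0, True))    # 1.5
print(effective_lora_scale(0.8, False))   # 0.8
```

Because of the rounding, a requested 1023x1023 custom size would actually render at 1024x1024, and a base `lora_scale` of 1.0 behaves like 1.5 when `go_fast` is enabled.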
Output Schema
Output
Example Execution Logs
2024-12-11 07:21:35.118 | DEBUG | fp8.lora_loading:apply_lora_to_model:574 - Extracting keys
2024-12-11 07:21:35.119 | DEBUG | fp8.lora_loading:apply_lora_to_model:581 - Keys extracted
Applying LoRA: 0%| | 0/304 [00:00<?, ?it/s]
Applying LoRA: 100%|██████████| 304/304 [00:00<00:00, 2681.05it/s]
2024-12-11 07:21:35.232 | SUCCESS | fp8.lora_loading:unload_loras:564 - LoRAs unloaded in 0.11s
free=28928929992704
Downloading weights
2024-12-11T07:21:35Z | INFO | [ Initiating ] chunk_size=150M dest=/tmp/tmp_s58sdye/weights url=https://replicate.delivery/xezq/45jFefGsuPsV60nRAfff5gIE6xNO5fejpbRMbRal55W1H8z8JA/trained_model.tar
2024-12-11T07:21:39Z | INFO | [ Complete ] dest=/tmp/tmp_s58sdye/weights size="172 MB" total_elapsed=4.213s url=https://replicate.delivery/xezq/45jFefGsuPsV60nRAfff5gIE6xNO5fejpbRMbRal55W1H8z8JA/trained_model.tar
Downloaded weights in 4.24s
2024-12-11 07:21:39.472 | INFO | fp8.lora_loading:convert_lora_weights:498 - Loading LoRA weights for /src/weights-cache/b2a87e8999153267
2024-12-11 07:21:39.547 | INFO | fp8.lora_loading:convert_lora_weights:519 - LoRA weights loaded
2024-12-11 07:21:39.547 | DEBUG | fp8.lora_loading:apply_lora_to_model:574 - Extracting keys
2024-12-11 07:21:39.548 | DEBUG | fp8.lora_loading:apply_lora_to_model:581 - Keys extracted
Applying LoRA: 0%| | 0/304 [00:00<?, ?it/s]
Applying LoRA: 100%|██████████| 304/304 [00:00<00:00, 2683.41it/s]
2024-12-11 07:21:39.661 | SUCCESS | fp8.lora_loading:load_lora:539 - LoRA applied in 0.19s
Using seed: 9073
28it [00:05, 4.83it/s]
28it [00:05, 4.75it/s]
28it [00:05, 4.75it/s]
Total safe images: 3 out of 3
Version Details
- Version ID
7d2bc783d7bf478a05dcae34e334753d840ce79495df1a8560e0dc1851223a0d
- Version Created
- December 10, 2024