doobls-ai/interor-2
About

A LoRA fine-tune for generating interior scenes. Including the trigger word `interior-2` at the start of the prompt (as in the example below) activates the trained style.

Example Output
"
interior-2 Imagine a modern office room designed for productivity and comfort. A central desk faces the entrance, equipped with a computer setup: a monitor, keyboard, and mouse. To the right of the monitor, thereβs a pen holder filled with writing tools and a notepad beside it. On the left, a framed photograph adds a personal touch, while a desk lamp in the far-left corner provides focused lighting.
Within armβs reach on the right side stands a filing cabinet, while a nearby printer station includes a paper supply shelf. A cozy seating area with two chairs and a small coffee table is positioned in the opposite corner from the entrance. Against the left wall, a bookshelf displays books and decorative items. Behind the desk, a mounted whiteboard offers space for quick notes and brainstorming sessions.
"Output



Performance Metrics
All Input Parameters
{ "model": "dev", "prompt": "interior-2 Imagine a modern office room designed for productivity and comfort. A central desk faces the entrance, equipped with a computer setup: a monitor, keyboard, and mouse. To the right of the monitor, thereβs a pen holder filled with writing tools and a notepad beside it. On the left, a framed photograph adds a personal touch, while a desk lamp in the far-left corner provides focused lighting.\n\nWithin armβs reach on the right side stands a filing cabinet, while a nearby printer station includes a paper supply shelf. A cozy seating area with two chairs and a small coffee table is positioned in the opposite corner from the entrance. Against the left wall, a bookshelf displays books and decorative items. Behind the desk, a mounted whiteboard offers space for quick notes and brainstorming sessions.", "go_fast": false, "lora_scale": 1, "megapixels": "1", "num_outputs": 3, "aspect_ratio": "1:1", "output_format": "webp", "guidance_scale": 3, "output_quality": 80, "prompt_strength": 0.8, "extra_lora_scale": 1, "num_inference_steps": 28 }
Input Parameters
- `mask`: Image mask for inpainting mode. If provided, the `aspect_ratio`, `width`, and `height` inputs are ignored.
- `seed`: Random seed. Set this for reproducible generation.
- `image`: Input image for image-to-image or inpainting mode. If provided, the `aspect_ratio`, `width`, and `height` inputs are ignored.
- `model`: Which model to run inference with. The dev model performs best with around 28 inference steps, while the schnell model only needs 4 steps.
- `width`: Width of the generated image. Only used if `aspect_ratio` is set to custom. Rounded to the nearest multiple of 16 (see the sketch after this list). Incompatible with fast generation.
- `height`: Height of the generated image. Only used if `aspect_ratio` is set to custom. Rounded to the nearest multiple of 16 (see the sketch after this list). Incompatible with fast generation.
- `prompt` (required): Prompt for the generated image. Including the `trigger_word` used in the training process makes it more likely that the trained object, style, or concept appears in the resulting image.
- `go_fast`: Run faster predictions with a model optimized for speed (currently fp8-quantized); disable to run in the original bf16.
- `extra_lora`: Load additional LoRA weights. Supports Replicate models in the format <owner>/<username> or <owner>/<username>/<version>, HuggingFace URLs in the format huggingface.co/<owner>/<model-name>, CivitAI URLs in the format civitai.com/models/<id>[/<model-name>], or arbitrary .safetensors URLs from the Internet. For example, 'fofr/flux-pixar-cars'.
- `lora_scale`: Determines how strongly the main LoRA is applied. Sane results fall between 0 and 1 for base inference. With `go_fast`, a 1.5x multiplier is applied to this value (see the sketch after this list), which has generally performed well; you may still need to experiment to find the best value for your particular LoRA.
- `megapixels`: Approximate number of megapixels for the generated image.
- `num_outputs`: Number of outputs to generate.
- `aspect_ratio`: Aspect ratio for the generated image. If custom is selected, the `width` and `height` inputs above are used and the model runs in bf16 mode.
- `output_format`: Format of the output images.
- `guidance_scale`: Guidance scale for the diffusion process. Lower values can give more realistic images. Good values to try are 2, 2.5, 3, and 3.5.
- `output_quality`: Quality when saving the output images, from 0 to 100. 100 is best quality, 0 is lowest quality. Not relevant for .png outputs.
- `prompt_strength`: Prompt strength when using img2img. 1.0 corresponds to full destruction of the information in the input image.
- `extra_lora_scale`: Determines how strongly the extra LoRA is applied. Sane results fall between 0 and 1 for base inference. With `go_fast`, a 1.5x multiplier is applied to this value, which has generally performed well; you may still need to experiment to find the best value for your particular LoRA.
- `replicate_weights`: Load LoRA weights. Supports the same formats as `extra_lora`: Replicate models (<owner>/<username> or <owner>/<username>/<version>), HuggingFace URLs (huggingface.co/<owner>/<model-name>), CivitAI URLs (civitai.com/models/<id>[/<model-name>]), or arbitrary .safetensors URLs from the Internet. For example, 'fofr/flux-pixar-cars'.
- `num_inference_steps`: Number of denoising steps. More steps can give more detailed images, but take longer.
- `disable_safety_checker`: Disable the safety checker for generated images.
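Two of the numeric rules above are easy to misread, so here is a small sketch of how they behave as documented: the rounding of `width`/`height` to a multiple of 16, and the 1.5x LoRA-scale multiplier under `go_fast`. The helper names are illustrative only and are not part of the model's own code.

```python
def round_to_multiple_of_16(value: int) -> int:
    """Illustrative helper: width/height are rounded to the nearest multiple
    of 16 when aspect_ratio is set to custom (per the descriptions above)."""
    return int(round(value / 16)) * 16


def effective_lora_scale(requested_scale: float, go_fast: bool) -> float:
    """Illustrative helper: with go_fast enabled, a 1.5x multiplier is applied
    to lora_scale / extra_lora_scale (per the descriptions above)."""
    return requested_scale * 1.5 if go_fast else requested_scale


print(round_to_multiple_of_16(900))              # 896, the nearest multiple of 16
print(effective_lora_scale(1.0, go_fast=True))   # 1.5
print(effective_lora_scale(1.0, go_fast=False))  # 1.0
```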
Output Schema
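The schema itself did not render on this page. For LoRA image models of this kind on Replicate, the output is typically a list of image URIs; the sketch below reflects that convention and is an assumption, not copied from this page.

```json
{
  "type": "array",
  "items": {
    "type": "string",
    "format": "uri"
  },
  "title": "Output"
}
```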
Example Execution Logs
```
2024-12-11 13:27:33.282 | DEBUG   | fp8.lora_loading:apply_lora_to_model:574 - Extracting keys
2024-12-11 13:27:33.282 | DEBUG   | fp8.lora_loading:apply_lora_to_model:581 - Keys extracted
Applying LoRA: 100%|██████████| 304/304 [00:00<00:00, 2693.95it/s]
2024-12-11 13:27:33.395 | SUCCESS | fp8.lora_loading:unload_loras:564 - LoRAs unloaded in 0.11s
free=29030582161408
Downloading weights
2024-12-11T13:27:33Z | INFO | [ Initiating ] chunk_size=150M dest=/tmp/tmp_m0z9zh9/weights url=https://replicate.delivery/xezq/AF86GWRJlv7aIRYeAUIfANkEzDlJcihKI371On0siG2e7CznA/trained_model.tar
2024-12-11T13:27:36Z | INFO | [ Complete ] dest=/tmp/tmp_m0z9zh9/weights size="172 MB" total_elapsed=2.681s url=https://replicate.delivery/xezq/AF86GWRJlv7aIRYeAUIfANkEzDlJcihKI371On0siG2e7CznA/trained_model.tar
Downloaded weights in 2.71s
2024-12-11 13:27:36.104 | INFO    | fp8.lora_loading:convert_lora_weights:498 - Loading LoRA weights for /src/weights-cache/02f533e00577f63e
2024-12-11 13:27:36.178 | INFO    | fp8.lora_loading:convert_lora_weights:519 - LoRA weights loaded
2024-12-11 13:27:36.179 | DEBUG   | fp8.lora_loading:apply_lora_to_model:574 - Extracting keys
2024-12-11 13:27:36.179 | DEBUG   | fp8.lora_loading:apply_lora_to_model:581 - Keys extracted
Applying LoRA: 100%|██████████| 304/304 [00:00<00:00, 2696.24it/s]
2024-12-11 13:27:36.292 | SUCCESS | fp8.lora_loading:load_lora:539 - LoRA applied in 0.19s
Using seed: 63678
28it [00:05, 4.90it/s]
28it [00:05, 4.82it/s]
28it [00:05, 4.82it/s]
Total safe images: 3 out of 3
```
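As a rough reading of these logs: at about 4.8 denoising iterations per second, each 28-step image takes roughly 28 / 4.8 ≈ 5.8 s, and the three outputs are generated sequentially, so denoising alone accounts for roughly 17–18 s in this run, on top of about 2.7 s to download the fine-tuned weights and under 0.2 s to apply the LoRA.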
Version Details
- Version ID: `91f2ef63c76a73d2ec4c67cf7b2a9672e074046cf4fde1d98e46a5829f7ea68b`
- Version Created: December 10, 2024