adcopy-ai/reversedhat
About
Example Output
Prompt:
"
Create an image of a stylish man wearing a REVERSEDHAT with the word 'TAMPA' prominently displayed on the front, but with the letters mirrored or flipped (i.e., '∀ԀW∀⊥'). He should be standing on a rooftop with a city skyline in the background. The man should be dressed in a trendy outfit and have a relaxed expression. The city lights should be visible in the fading light.
Ensure that the mirrored word '∀ԀW∀⊥' is clearly visible on the front of the hat, with no additional text in the image.
"Output


Performance Metrics
84.32s
Prediction Time
84.33s
Total Time
All Input Parameters
{
  "model": "dev",
  "width": 720,
  "height": 720,
  "prompt": "Create an image of a stylish man wearing a REVERSEDHAT with the word 'TAMPA' prominently displayed on the front, but with the letters mirrored or flipped (i.e., '∀ԀW∀⊥'). He should be standing on a rooftop with a city skyline in the background. The man should be dressed in a trendy outfit and have a relaxed expression. The city lights should be visible in the fading light.\n\nEnsure that the mirrored word '∀ԀW∀⊥' is clearly visible on the front of the hat, with no additional text in the image.",
  "lora_scale": 1,
  "num_outputs": 3,
  "aspect_ratio": "1:1",
  "output_format": "jpg",
  "guidance_scale": 4.5,
  "output_quality": 100,
  "prompt_strength": 1,
  "extra_lora_scale": 1,
  "num_inference_steps": 50
}
Input Parameters
- `mask`: Image mask for image inpainting mode. If provided, the `aspect_ratio`, `width`, and `height` inputs are ignored.
- `seed`: Random seed. Set for reproducible generation.
- `image`: Input image for image-to-image or inpainting mode. If provided, the `aspect_ratio`, `width`, and `height` inputs are ignored.
- `model`: Which model to run inference with. The dev model performs best with around 28 inference steps, but the schnell model needs only 4.
- `width`: Width of the generated image. Only applies if `aspect_ratio` is set to custom. Rounded to the nearest multiple of 16. Incompatible with fast generation.
- `height`: Height of the generated image. Only applies if `aspect_ratio` is set to custom. Rounded to the nearest multiple of 16. Incompatible with fast generation.
- `prompt` (required): Prompt for the generated image. Including the `trigger_word` used in the training process makes it more likely the trained object, style, or concept appears in the resulting image.
- `go_fast`: Run faster predictions with a model optimized for speed (currently fp8-quantized); disable to run in the original bf16.
- `extra_lora`: Load LoRA weights. Supports Replicate models in the format <owner>/<username> or <owner>/<username>/<version>, HuggingFace URLs in the format huggingface.co/<owner>/<model-name>, CivitAI URLs in the format civitai.com/models/<id>[/<model-name>], or arbitrary .safetensors URLs from the Internet. For example, 'fofr/flux-pixar-cars'.
- `lora_scale`: How strongly the main LoRA is applied. Sane results fall between 0 and 1 for base inference. With `go_fast`, a 1.5x multiplier is applied to this value, which generally performs well, but you may still need to experiment to find the best value for your particular LoRA.
- `megapixels`: Approximate number of megapixels for the generated image.
- `num_outputs`: Number of outputs to generate.
- `aspect_ratio`: Aspect ratio for the generated image. If custom is selected, the `height` and `width` inputs below are used and the model runs in bf16 mode.
- `output_format`: Format of the output images.
- `guidance_scale`: Guidance scale for the diffusion process. Lower values can give more realistic images. Good values to try are 2, 2.5, 3, and 3.5.
- `output_quality`: Quality when saving the output images, from 0 to 100; 100 is best quality, 0 is lowest. Not relevant for .png outputs.
- `prompt_strength`: Prompt strength when using img2img. 1.0 corresponds to full destruction of the information in the input image.
- `extra_lora_scale`: How strongly the extra LoRA is applied. Same 0 to 1 guidance and `go_fast` 1.5x multiplier as `lora_scale`.
- `replicate_weights`: Load LoRA weights. Same supported formats as `extra_lora`.
- `num_inference_steps`: Number of denoising steps. More steps can give more detailed images but take longer.
- `disable_safety_checker`: Disable the safety checker for generated images.
Output Schema
Output
Example Execution Logs
Using seed: 2725
Prompt: Create an image of a stylish man wearing a REVERSEDHAT with the word 'TAMPA' prominently displayed on the front, but with the letters mirrored or flipped (i.e., '∀ԀW∀⊥'). He should be standing on a rooftop with a city skyline in the background. The man should be dressed in a trendy outfit and have a relaxed expression. The city lights should be visible in the fading light. Ensure that the mirrored word '∀ԀW∀⊥' is clearly visible on the front of the hat, with no additional text in the image.
[!] txt2img mode
Using dev model
free=3107932274688
Downloading weights
2024-09-27T15:16:05Z | INFO | [ Initiating ] chunk_size=150M dest=/tmp/tmp0peqp_gd/weights url=https://replicate.delivery/yhqm/22Y5uX55oG4aMZQfhaygRTIBVjZpU052xi3q0waBTo5RjjwJA/trained_model.tar
2024-09-27T15:16:06Z | INFO | [ Complete ] dest=/tmp/tmp0peqp_gd/weights size="172 MB" total_elapsed=1.334s url=https://replicate.delivery/yhqm/22Y5uX55oG4aMZQfhaygRTIBVjZpU052xi3q0waBTo5RjjwJA/trained_model.tar
Downloaded weights in 1.36s
Loaded LoRAs in 3.04s
  0%|          | 0/50 [00:00<?, ?it/s]
100%|██████████| 50/50 [01:19<00:00, 1.58s/it]
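The reported prediction time is consistent with these logs: at the final rate of roughly 1.58 s per denoising step, the 50 steps plus weight download and LoRA loading account for nearly all of the 84.32 s, with the small remainder presumably spent on decoding and saving the outputs (an assumption; those stages are not timed in the logs). A quick check:

```python
steps = 50
sec_per_step = 1.58        # final rate from the progress bar
denoise = steps * sec_per_step
setup = 1.36 + 3.04        # "Downloaded weights in 1.36s" + "Loaded LoRAs in 3.04s"
total = denoise + setup
print(round(total, 2))     # 83.4, vs. the reported 84.32s prediction time
```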
Version Details
- Version ID
2cc85d0e3d909b371b453456a6ce1dbe0a07f16543bfccc44a217c97499e2b49
- Version Created
- September 30, 2024