amhage/theamberhage
About
Example Output
Prompt:
"theamberhage A stylish woman walking confidently down a bustling street in New York City, surrounded by taxis, tall buildings, and vibrant city life. Sheβs wearing a chic, fashion-forward outfit with layered textures and bold accessories, wind gently catching her long wavy brown hair. Candid street-style photography, golden hour lighting, shallow depth of field, editorial Vogue-style aesthetic."
Output
Performance Metrics
Prediction Time: 10.41s
Total Time: 10.51s
All Input Parameters
{
"image": "https://replicate.delivery/pbxt/NF9IICPrVNOaNsP39mmtXvcLssVnLqfyMn4lobr6EkhzgdYu/amber%20wearing%20robe%20with%20her%20hair%20down.jpg",
"model": "dev",
"prompt": "theamberhage A stylish woman walking confidently down a bustling street in New York City, surrounded by taxis, tall buildings, and vibrant city life. Sheβs wearing a chic, fashion-forward outfit with layered textures and bold accessories, wind gently catching her long wavy brown hair. Candid street-style photography, golden hour lighting, shallow depth of field, editorial Vogue-style aesthetic.",
"go_fast": false,
"lora_scale": 1,
"megapixels": "1",
"num_outputs": 1,
"aspect_ratio": "1:1",
"output_format": "jpg",
"guidance_scale": 3,
"output_quality": 80,
"prompt_strength": 0.8,
"extra_lora_scale": 1,
"num_inference_steps": 28
}
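The same request can be reproduced programmatically. Below is a minimal sketch using the Replicate Python client, assuming REPLICATE_API_TOKEN is set in the environment and that the version ID listed under Version Details at the bottom of this page is still current; apart from those assumptions, every input value is copied from the parameters above.

import replicate

# Run the amhage/theamberhage LoRA with the same inputs as the example above.
# The version hash comes from the Version Details section below.
output = replicate.run(
    "amhage/theamberhage:5def7f40b7319860f90ae6b0a05cdcf7781142af34448c6aaaa3c1e20c79a1c2",
    input={
        "image": "https://replicate.delivery/pbxt/NF9IICPrVNOaNsP39mmtXvcLssVnLqfyMn4lobr6EkhzgdYu/amber%20wearing%20robe%20with%20her%20hair%20down.jpg",
        "prompt": (
            "theamberhage A stylish woman walking confidently down a bustling "
            "street in New York City, surrounded by taxis, tall buildings, and "
            "vibrant city life. She's wearing a chic, fashion-forward outfit "
            "with layered textures and bold accessories, wind gently catching "
            "her long wavy brown hair. Candid street-style photography, golden "
            "hour lighting, shallow depth of field, editorial Vogue-style aesthetic."
        ),
        "model": "dev",
        "go_fast": False,
        "lora_scale": 1,
        "megapixels": "1",
        "num_outputs": 1,
        "aspect_ratio": "1:1",
        "output_format": "jpg",
        "guidance_scale": 3,
        "output_quality": 80,
        "prompt_strength": 0.8,
        "extra_lora_scale": 1,
        "num_inference_steps": 28,
    },
)
print(output)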
Input Parameters
- mask
- Image mask for image inpainting mode. If provided, aspect_ratio, width, and height inputs are ignored.
- seed
- Random seed. Set for reproducible generation
- image
- Input image for image to image or inpainting mode. If provided, aspect_ratio, width, and height inputs are ignored.
- model
- Which model to run inference with. The dev model performs best with around 28 inference steps but the schnell model only needs 4 steps.
- width
- Width of generated image. Only works if `aspect_ratio` is set to custom. Will be rounded to nearest multiple of 16. Incompatible with fast generation
- height
- Height of generated image. Only works if `aspect_ratio` is set to custom. Will be rounded to nearest multiple of 16. Incompatible with fast generation
- prompt (required)
- Prompt for generated image. If you include the `trigger_word` used in the training process you are more likely to activate the trained object, style, or concept in the resulting image.
- go_fast
- Run faster predictions with model optimized for speed (currently fp8 quantized); disable to run in original bf16
- extra_lora
- Load LoRA weights. Supports Replicate models in the format <owner>/<username> or <owner>/<username>/<version>, HuggingFace URLs in the format huggingface.co/<owner>/<model-name>, CivitAI URLs in the format civitai.com/models/<id>[/<model-name>], or arbitrary .safetensors URLs from the Internet. For example, 'fofr/flux-pixar-cars'
- lora_scale
- Determines how strongly the main LoRA should be applied. Sane results between 0 and 1 for base inference. For go_fast we apply a 1.5x multiplier to this value; we've generally seen good performance when scaling the base value by that amount. You may still need to experiment to find the best value for your particular lora. (The sketch after this parameter list illustrates the effective go_fast scaling.)
- megapixels
- Approximate number of megapixels for generated image
- num_outputs
- Number of outputs to generate
- aspect_ratio
- Aspect ratio for the generated image. If custom is selected, uses height and width below & will run in bf16 mode
- output_format
- Format of the output images
- guidance_scale
- Guidance scale for the diffusion process. Lower values can give more realistic images. Good values to try are 2, 2.5, 3 and 3.5
- output_quality
- Quality when saving the output images, from 0 to 100. 100 is best quality, 0 is lowest quality. Not relevant for .png outputs
- prompt_strength
- Prompt strength when using img2img. 1.0 corresponds to full destruction of information in image
- extra_lora_scale
- Determines how strongly the extra LoRA should be applied. Sane results between 0 and 1 for base inference. For go_fast we apply a 1.5x multiplier to this value; we've generally seen good performance when scaling the base value by that amount. You may still need to experiment to find the best value for your particular lora.
- replicate_weights
- Load LoRA weights. Supports Replicate models in the format <owner>/<username> or <owner>/<username>/<version>, HuggingFace URLs in the format huggingface.co/<owner>/<model-name>, CivitAI URLs in the format civitai.com/models/<id>[/<model-name>], or arbitrary .safetensors URLs from the Internet. For example, 'fofr/flux-pixar-cars'
- num_inference_steps
- Number of denoising steps. More steps can give more detailed images, but take longer.
- disable_safety_checker
- Disable safety checker for generated images.
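Two of the notes above are easy to misread: custom width and height values are rounded to the nearest multiple of 16, and with go_fast enabled a 1.5x multiplier is applied to lora_scale and extra_lora_scale. The sketch below mirrors that behaviour client-side so you can predict the effective values; the helper names are hypothetical, and the rounding and scaling rules are taken only from the parameter descriptions above, not from the model's source.

# Hypothetical helpers mirroring the documented behaviour; the model applies
# these rules server-side, so this is only for predicting effective values.

def round_to_multiple_of_16(value: int) -> int:
    # "Will be rounded to nearest multiple of 16" (width/height with a custom aspect_ratio)
    return max(16, round(value / 16) * 16)

def effective_lora_scale(lora_scale: float, go_fast: bool) -> float:
    # "For go_fast we apply a 1.5x multiplier to this value"
    return lora_scale * 1.5 if go_fast else lora_scale

print(round_to_multiple_of_16(900))             # 896
print(effective_lora_scale(1.0, go_fast=True))  # 1.5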
Output Schema
Output
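The output schema itself did not carry over to this page, but predictions from this model typically return a list of image URLs, one per requested output. Below is a minimal sketch for saving them, assuming the items are plain URL strings (newer versions of the replicate client may return file-like FileOutput objects instead, in which case item.read() or item.url would be used rather than an HTTP request); the save_outputs helper is hypothetical.

import requests

def save_outputs(output, prefix="output_", ext="jpg"):
    # Assumption: 'output' is the list of image URLs returned by the
    # replicate.run(...) sketch above. Newer replicate clients may return
    # FileOutput objects instead; use item.read() / item.url in that case.
    for i, item in enumerate(output):
        resp = requests.get(str(item), timeout=60)
        resp.raise_for_status()
        with open(f"{prefix}{i}.{ext}", "wb") as f:
            f.write(resp.content)

# Example: save_outputs(output) writes output_0.jpg when num_outputs is 1.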
Example Execution Logs
free=25836122812416
Downloading weights
2025-06-24T18:44:02Z | INFO | [ Initiating ] chunk_size=150M dest=/tmp/tmpv80gvdno/weights url=https://replicate.delivery/xezq/ADgy3JCRxO5cOVeZqnfh9Jp2A2fOwp5CU2fyYgKpSV04OmmTB/trained_model.tar
2025-06-24T18:44:02Z | INFO | [ Cache Service ] enabled=true scheme=http target=hermes.services.svc.cluster.local
2025-06-24T18:44:02Z | INFO | [ Cache URL Rewrite ] enabled=true target_url=http://hermes.services.svc.cluster.local/replicate.delivery/xezq/ADgy3JCRxO5cOVeZqnfh9Jp2A2fOwp5CU2fyYgKpSV04OmmTB/trained_model.tar url=https://replicate.delivery/xezq/ADgy3JCRxO5cOVeZqnfh9Jp2A2fOwp5CU2fyYgKpSV04OmmTB/trained_model.tar
2025-06-24T18:44:02Z | INFO | [ Redirect ] redirect_url=http://r8-east4-loras-ric1.cwlota.com/8e8b30cae8e716e4954bd05e456b3b7467e966580f868b412dde0615902acec9?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Checksum-Mode=ENABLED&X-Amz-Credential=CWNZUVKLDHXVHEZN%2F20250624%2FUS-EAST-04A%2Fs3%2Faws4_request&X-Amz-Date=20250624T184402Z&X-Amz-Expires=600&X-Amz-SignedHeaders=host&x-id=GetObject&X-Amz-Signature=911eda406acbc29e5ab812d2fa3b222c09328d3892f2ec18130abffbe1a84a28 url=http://hermes.services.svc.cluster.local/replicate.delivery/xezq/ADgy3JCRxO5cOVeZqnfh9Jp2A2fOwp5CU2fyYgKpSV04OmmTB/trained_model.tar
2025-06-24T18:44:02Z | INFO | [ Complete ] dest=/tmp/tmpv80gvdno/weights size="172 MB" total_elapsed=0.455s url=https://replicate.delivery/xezq/ADgy3JCRxO5cOVeZqnfh9Jp2A2fOwp5CU2fyYgKpSV04OmmTB/trained_model.tar
Downloaded weights in 0.52s
Loaded LoRAs in 1.11s
Using seed: 50519
Prompt: theamberhage A stylish woman walking confidently down a bustling street in New York City, surrounded by taxis, tall buildings, and vibrant city life. She's wearing a chic, fashion-forward outfit with layered textures and bold accessories, wind gently catching her long wavy brown hair. Candid street-style photography, golden hour lighting, shallow depth of field, editorial Vogue-style aesthetic.
Input image size: 3535x5303
[!] Resizing input image from 3535x5303 to 960x1440
[!] img2img mode
  0%|          | 0/23 [00:00<?, ?it/s]
  4%|█         | 1/23 [00:00<00:06, 3.62it/s]
  9%|█         | 2/23 [00:00<00:06, 3.16it/s]
 13%|██        | 3/23 [00:00<00:06, 3.01it/s]
 17%|██        | 4/23 [00:01<00:06, 2.95it/s]
 22%|███       | 5/23 [00:01<00:06, 2.92it/s]
 26%|███       | 6/23 [00:02<00:05, 2.90it/s]
 30%|███       | 7/23 [00:02<00:05, 2.89it/s]
 35%|████      | 8/23 [00:02<00:05, 2.88it/s]
 39%|████      | 9/23 [00:03<00:04, 2.87it/s]
 43%|█████     | 10/23 [00:03<00:04, 2.87it/s]
 48%|█████     | 11/23 [00:03<00:04, 2.87it/s]
 52%|██████    | 12/23 [00:04<00:03, 2.87it/s]
 57%|██████    | 13/23 [00:04<00:03, 2.87it/s]
 61%|██████    | 14/23 [00:04<00:03, 2.87it/s]
 65%|███████   | 15/23 [00:05<00:02, 2.87it/s]
 70%|███████   | 16/23 [00:05<00:02, 2.87it/s]
 74%|████████  | 17/23 [00:05<00:02, 2.87it/s]
 78%|████████  | 18/23 [00:06<00:01, 2.87it/s]
 83%|█████████ | 19/23 [00:06<00:01, 2.87it/s]
 87%|█████████ | 20/23 [00:06<00:01, 2.87it/s]
 91%|██████████| 21/23 [00:07<00:00, 2.87it/s]
 96%|██████████| 22/23 [00:07<00:00, 2.87it/s]
100%|██████████| 23/23 [00:07<00:00, 2.87it/s]
100%|██████████| 23/23 [00:07<00:00, 2.89it/s]
Total safe images: 1 out of 1
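Two details in this log are worth calling out. The 3535x5303 input is resized to 960x1440 before sampling (both dimensions are multiples of 16, preserving the 2:3 aspect ratio), and the sampler runs 23 steps rather than the requested 28 because img2img reduces the effective step count according to prompt_strength. The exact rule is not stated on this page, so treat the arithmetic below as an assumption based on the usual img2img convention.

import math

num_inference_steps = 28  # requested in the inputs above
prompt_strength = 0.8     # img2img strength from the inputs above

# Assumed img2img convention: only the last ceil(steps * strength) denoising
# steps are actually run on the noised input image.
effective_steps = math.ceil(num_inference_steps * prompt_strength)
print(effective_steps)  # 23 -- matches the 0/23 ... 23/23 progress bar in the log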
Version Details
- Version ID
5def7f40b7319860f90ae6b0a05cdcf7781142af34448c6aaaa3c1e20c79a1c2
- Version Created
June 23, 2025