vincetheeleventh/flux-gramps01
About
A fine-tuned FLUX.1 model
Example Output
Prompt:
"
An ink and watercolor illustration in the style of E.H. Shepard, featuring a warm and expressive scene. The linework is delicate and sketchy, with a flowing, organic quality, full of charm and gentle humor. The background is lightly detailed with soft, airy strokes, evoking a sense of nostalgia. The composition is balanced but informal, capturing a cozy, storybook atmosphere. The overall color palette is muted, with natural tones and delicate shading, resembling classic early 20th-century children's book illustrations.
illustration of an elderly chinese man ukj with stocky build wearing a leather jacket, playing tea party with cheerful granddaughter age 5 and stuffed animals
"Output
Performance Metrics
- Prediction Time: 4.20s
- Total Time: 4.24s
All Input Parameters
{
"model": "dev",
"prompt": "An ink and watercolor illustration in the style of E.H. Shepard, featuring a warm and expressive scene. The linework is delicate and sketchy, with a flowing, organic quality, full of charm and gentle humor. The background is lightly detailed with soft, airy strokes, evoking a sense of nostalgia. The composition is balanced but informal, capturing a cozy, storybook atmosphere. The overall color palette is muted, with natural tones and delicate shading, resembling classic early 20th-century children's book illustrations.\n\nillustration of an elderly chinese man ukj with stocky build wearing a leather jacket, playing tea party with cheerful granddaughter age 5 and stuffed animals ",
"go_fast": true,
"lora_scale": 0.5,
"megapixels": "1",
"num_outputs": 1,
"aspect_ratio": "1:1",
"output_format": "webp",
"guidance_scale": 5,
"output_quality": 80,
"prompt_strength": 1,
"extra_lora_scale": 1,
"num_inference_steps": 40
}
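For reference, a minimal sketch of reproducing this prediction with the Replicate Python client is shown below. The model reference and input values come from this page; the output handling is an assumption (FLUX LoRA models on Replicate typically return a list of image URLs or files), and the prompt string is abbreviated here.

import replicate

# Minimal sketch: assumes the replicate package is installed and
# REPLICATE_API_TOKEN is set in the environment.
output = replicate.run(
    "vincetheeleventh/flux-gramps01",  # the exact version can also be pinned, see "Version Details" below
    input={
        "model": "dev",
        # Full prompt abbreviated; the complete text is in "All Input Parameters" above.
        "prompt": "An ink and watercolor illustration in the style of E.H. Shepard ... "
                  "illustration of an elderly chinese man ukj with stocky build wearing a leather jacket, "
                  "playing tea party with cheerful granddaughter age 5 and stuffed animals",
        "go_fast": True,
        "lora_scale": 0.5,
        "megapixels": "1",
        "num_outputs": 1,
        "aspect_ratio": "1:1",
        "output_format": "webp",
        "guidance_scale": 5,
        "output_quality": 80,
        "prompt_strength": 1,
        "extra_lora_scale": 1,
        "num_inference_steps": 40,
    },
)
print(output)  # assumption: a list of generated image URLs/files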
Input Parameters
- mask: Image mask for image inpainting mode. If provided, aspect_ratio, width, and height inputs are ignored.
- seed: Random seed. Set for reproducible generation.
- image: Input image for image to image or inpainting mode. If provided, aspect_ratio, width, and height inputs are ignored.
- model: Which model to run inference with. The dev model performs best with around 28 inference steps, but the schnell model only needs 4 steps.
- width: Width of generated image. Only works if `aspect_ratio` is set to custom. Will be rounded to the nearest multiple of 16. Incompatible with fast generation.
- height: Height of generated image. Only works if `aspect_ratio` is set to custom. Will be rounded to the nearest multiple of 16. Incompatible with fast generation.
- prompt (required): Prompt for the generated image. If you include the `trigger_word` used in the training process, you are more likely to activate the trained object, style, or concept in the resulting image.
- go_fast: Run faster predictions with a model optimized for speed (currently fp8 quantized); disable to run in the original bf16.
- extra_lora: Load LoRA weights. Supports Replicate models in the format <owner>/<username> or <owner>/<username>/<version>, HuggingFace URLs in the format huggingface.co/<owner>/<model-name>, CivitAI URLs in the format civitai.com/models/<id>[/<model-name>], or arbitrary .safetensors URLs from the Internet. For example, 'fofr/flux-pixar-cars'.
- lora_scale: Determines how strongly the main LoRA should be applied. Sane results between 0 and 1 for base inference. For go_fast we apply a 1.5x multiplier to this value; we've generally seen good performance when scaling the base value by that amount. You may still need to experiment to find the best value for your particular LoRA (see the sketch after this list).
- megapixels: Approximate number of megapixels for the generated image.
- num_outputs: Number of outputs to generate.
- aspect_ratio: Aspect ratio for the generated image. If custom is selected, uses the height and width below and will run in bf16 mode.
- output_format: Format of the output images.
- guidance_scale: Guidance scale for the diffusion process. Lower values can give more realistic images. Good values to try are 2, 2.5, 3, and 3.5.
- output_quality: Quality when saving the output images, from 0 to 100. 100 is best quality, 0 is lowest quality. Not relevant for .png outputs.
- prompt_strength: Prompt strength when using img2img. 1.0 corresponds to full destruction of information in the image.
- extra_lora_scale: Determines how strongly the extra LoRA should be applied. Sane results between 0 and 1 for base inference. For go_fast we apply a 1.5x multiplier to this value; we've generally seen good performance when scaling the base value by that amount. You may still need to experiment to find the best value for your particular LoRA.
- replicate_weights: Load LoRA weights. Supports Replicate models in the format <owner>/<username> or <owner>/<username>/<version>, HuggingFace URLs in the format huggingface.co/<owner>/<model-name>, CivitAI URLs in the format civitai.com/models/<id>[/<model-name>], or arbitrary .safetensors URLs from the Internet. For example, 'fofr/flux-pixar-cars'.
- num_inference_steps: Number of denoising steps. More steps can give more detailed images, but take longer.
- disable_safety_checker: Disable safety checker for generated images.
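Two of the behaviours described above are easy to miscompute by hand: custom width/height values are rounded to the nearest multiple of 16, and go_fast applies a 1.5x multiplier to the LoRA scales. The sketch below illustrates only that arithmetic; the helper names are hypothetical and this is not the model's own code.

def round_to_multiple_of_16(value: int) -> int:
    # Custom width/height are rounded to the nearest multiple of 16.
    return int(round(value / 16)) * 16

def effective_lora_scale(lora_scale: float, go_fast: bool) -> float:
    # Under go_fast (fp8), the documented 1.5x multiplier is applied.
    return lora_scale * 1.5 if go_fast else lora_scale

print(round_to_multiple_of_16(500))             # 496
print(effective_lora_scale(0.5, go_fast=True))  # 0.75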
Output Schema
Example Execution Logs
2025-02-06 06:33:18.420 | INFO | fp8.lora_loading:restore_clones:592 - Unloaded 304 layers
2025-02-06 06:33:18.422 | SUCCESS | fp8.lora_loading:unload_loras:563 - LoRAs unloaded in 0.023s
free=28896009428992
Downloading weights
2025-02-06T06:33:18Z | INFO | [ Initiating ] chunk_size=150M dest=/tmp/tmp5d61i51m/weights url=https://replicate.delivery/xezq/12RA57I3M5rHIRlgvWDQxaDmngFbPO11krJYSftviEJW7NGKA/trained_model.tar
2025-02-06T06:33:18Z | INFO | [ Complete ] dest=/tmp/tmp5d61i51m/weights size="4.7 MB" total_elapsed=0.095s url=https://replicate.delivery/xezq/12RA57I3M5rHIRlgvWDQxaDmngFbPO11krJYSftviEJW7NGKA/trained_model.tar
Downloaded weights in 0.11s
2025-02-06 06:33:18.536 | INFO | fp8.lora_loading:convert_lora_weights:502 - Loading LoRA weights for /src/weights-cache/138282184bb37d7a
2025-02-06 06:33:18.538 | INFO | fp8.lora_loading:convert_lora_weights:523 - LoRA weights loaded
2025-02-06 06:33:18.538 | DEBUG | fp8.lora_loading:apply_lora_to_model_and_optionally_store_clones:602 - Extracting keys
2025-02-06 06:33:18.538 | DEBUG | fp8.lora_loading:apply_lora_to_model_and_optionally_store_clones:609 - Keys extracted
Applying LoRA:   0%|          | 0/4 [00:00<?, ?it/s]
Applying LoRA: 100%|██████████| 4/4 [00:00<00:00, 2553.61it/s]
2025-02-06 06:33:18.540 | INFO | fp8.lora_loading:apply_lora_to_model_and_optionally_store_clones:661 - Loading LoRA in fp8
2025-02-06 06:33:18.540 | SUCCESS | fp8.lora_loading:load_lora:542 - LoRA applied in 0.004s
running quantized prediction
Using seed: 2734417738
  0%|          | 0/40 [00:00<?, ?it/s]
  5%|▌         | 2/40 [00:00<00:02, 17.07it/s]
 10%|█         | 4/40 [00:00<00:02, 12.59it/s]
 15%|█▌        | 6/40 [00:00<00:02, 11.58it/s]
 20%|██        | 8/40 [00:00<00:02, 11.16it/s]
 25%|██▌       | 10/40 [00:00<00:02, 10.86it/s]
 30%|███       | 12/40 [00:01<00:02, 10.46it/s]
 35%|███▌      | 14/40 [00:01<00:02, 10.49it/s]
 40%|████      | 16/40 [00:01<00:02, 10.51it/s]
 45%|████▌     | 18/40 [00:01<00:02, 10.53it/s]
 50%|█████     | 20/40 [00:01<00:01, 10.49it/s]
 55%|█████▌    | 22/40 [00:02<00:01, 10.38it/s]
 60%|██████    | 24/40 [00:02<00:01, 10.35it/s]
 65%|██████▌   | 26/40 [00:02<00:01, 10.41it/s]
 70%|███████   | 28/40 [00:02<00:01, 10.42it/s]
 75%|███████▌  | 30/40 [00:02<00:00, 10.43it/s]
 80%|████████  | 32/40 [00:03<00:00, 10.36it/s]
 85%|████████▌ | 34/40 [00:03<00:00, 10.38it/s]
 90%|█████████ | 36/40 [00:03<00:00, 10.40it/s]
 95%|█████████▌| 38/40 [00:03<00:00, 10.45it/s]
100%|██████████| 40/40 [00:03<00:00, 10.47it/s]
100%|██████████| 40/40 [00:03<00:00, 10.62it/s]
Total safe images: 1 out of 1
Version Details
- Version ID: 3c5446ab672252a9ff78cbf5ebbec5f6772b926833513f00349a719be559b414
- Version Created: March 1, 2025
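To reproduce results against exactly this build of the model, the version ID above can be appended to the model reference when running it. The snippet below is a sketch of that common Replicate pattern; the prompt is only a placeholder.

import replicate

# Sketch: pin the exact version listed above so later model updates
# do not change the behaviour of this call. Prompt is a placeholder.
output = replicate.run(
    "vincetheeleventh/flux-gramps01:"
    "3c5446ab672252a9ff78cbf5ebbec5f6772b926833513f00349a719be559b414",
    input={"prompt": "illustration of an elderly man ukj having a tea party, ink and watercolor style"},
)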