vestigiaproject/firstempirepaintings

375 runs Β· Dec 2024 Β· Cog 0.11.1
image-inpainting image-to-image lora text-to-image

About

An image model trained on portraits from Napoleonic-era France, notably by the painter FranΓ§ois GΓ©rard. Include the trigger word FIRSTEMPIREPAINTINGS in your prompt.

Example Output

Prompt:

"A portrait of a lady, by FranΓ§ois GΓ©rard, oil on canvas, 1812, cracked varnish, FIRSTEMPIREPAINTINGS"

Output

[Example output image]

Performance Metrics

8.63s Prediction Time
8.87s Total Time
All Input Parameters
{
  "seed": 9581,
  "model": "dev",
  "prompt": "A portrait of a lady, by FranΓ§ois GΓ©rard, oil on canvas, 1812, cracked varnish, FIRSTEMPIREPAINTINGS",
  "go_fast": false,
  "lora_scale": 1,
  "megapixels": "1",
  "num_outputs": 1,
  "aspect_ratio": "1:1",
  "output_format": "webp",
  "guidance_scale": 2,
  "output_quality": 80,
  "prompt_strength": 0.8,
  "extra_lora_scale": 1,
  "num_inference_steps": 28
}
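
The same prediction can be reproduced programmatically. The sketch below is illustrative only: it assumes the official Replicate Python client and a configured REPLICATE_API_TOKEN, neither of which is covered on this page; parameters omitted from the input fall back to the defaults listed under Input Parameters.

import replicate

# Reproduce the example prediction above. The version hash comes from the
# Version Details section at the end of this page.
output = replicate.run(
    "vestigiaproject/firstempirepaintings:92fb0a0a2de2b7800629ed5c594f7d6d607db001350d45c237ef4e570378a77f",
    input={
        "seed": 9581,
        "model": "dev",
        "prompt": "A portrait of a lady, by FranΓ§ois GΓ©rard, oil on canvas, 1812, cracked varnish, FIRSTEMPIREPAINTINGS",
        "go_fast": False,
        "guidance_scale": 2,
        "output_format": "webp",
        "num_inference_steps": 28,
    },
)
print(output)  # one entry per generated image; see Output Schema below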
Input Parameters
mask Type: string
Image mask for image inpainting mode. If provided, aspect_ratio, width, and height inputs are ignored.
seed Type: integer
Random seed. Set for reproducible generation
image Type: string
Input image for image to image or inpainting mode. If provided, aspect_ratio, width, and height inputs are ignored.
model Default: dev
Which model to run inference with. The dev model performs best with around 28 inference steps, but the schnell model needs only 4 steps.
width Type: integer β€’ Range: 256 - 1440
Width of generated image. Only works if `aspect_ratio` is set to custom. Will be rounded to nearest multiple of 16. Incompatible with fast generation
height Type: integer β€’ Range: 256 - 1440
Height of generated image. Only works if `aspect_ratio` is set to custom. Will be rounded to nearest multiple of 16. Incompatible with fast generation
prompt (required) Type: string
Prompt for generated image. If you include the `trigger_word` used in the training process you are more likely to activate the trained object, style, or concept in the resulting image.
go_fast Type: boolean β€’ Default: false
Run faster predictions with a model optimized for speed (currently fp8-quantized); disable to run in the original bf16
extra_lora Type: string
Load LoRA weights. Supports Replicate models in the format <owner>/<username> or <owner>/<username>/<version>, HuggingFace URLs in the format huggingface.co/<owner>/<model-name>, CivitAI URLs in the format civitai.com/models/<id>[/<model-name>], or arbitrary .safetensors URLs from the Internet. For example, 'fofr/flux-pixar-cars'
lora_scale Type: number β€’ Default: 1 β€’ Range: -1 - 3
Determines how strongly the main LoRA should be applied. Sane results between 0 and 1 for base inference. For go_fast we apply a 1.5x multiplier to this value; we've generally seen good performance when scaling the base value by that amount. You may still need to experiment to find the best value for your particular lora.
megapixels Default: 1
Approximate number of megapixels for generated image
num_outputs Type: integer β€’ Default: 1 β€’ Range: 1 - 4
Number of outputs to generate
aspect_ratio Default: 1:1
Aspect ratio for the generated image. If custom is selected, uses height and width below & will run in bf16 mode
output_format Default: webp
Format of the output images
guidance_scale Type: number β€’ Default: 3 β€’ Range: 0 - 10
Guidance scale for the diffusion process. Lower values can give more realistic images. Good values to try are 2, 2.5, 3 and 3.5
output_quality Type: integer β€’ Default: 80 β€’ Range: 0 - 100
Quality when saving the output images, from 0 to 100. 100 is best quality, 0 is lowest quality. Not relevant for .png outputs
prompt_strength Type: number β€’ Default: 0.8 β€’ Range: 0 - 1
Prompt strength when using img2img. 1.0 corresponds to full destruction of information in image
extra_lora_scale Type: number β€’ Default: 1 β€’ Range: -1 - 3
Determines how strongly the extra LoRA should be applied. Sane results between 0 and 1 for base inference. For go_fast we apply a 1.5x multiplier to this value; we've generally seen good performance when scaling the base value by that amount. You may still need to experiment to find the best value for your particular lora.
replicate_weights Type: string
Load LoRA weights. Supports Replicate models in the format <owner>/<username> or <owner>/<username>/<version>, HuggingFace URLs in the format huggingface.co/<owner>/<model-name>, CivitAI URLs in the format civitai.com/models/<id>[/<model-name>], or arbitrary .safetensors URLs from the Internet. For example, 'fofr/flux-pixar-cars'
num_inference_steps Type: integer β€’ Default: 28 β€’ Range: 1 - 50
Number of denoising steps. More steps can give more detailed images, but take longer.
disable_safety_checker Type: boolean β€’ Default: false
Disable safety checker for generated images.
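
Passing image (and optionally mask) switches the model into image-to-image or inpainting mode, as described in the parameter list above. A minimal sketch, again assuming the Replicate Python client; the local file names are placeholders, not files referenced by this page.

import replicate

# Inpainting: `image` is the source picture, `mask` marks the region to repaint.
# aspect_ratio, width, and height are ignored whenever `image` is provided.
with open("source_portrait.png", "rb") as image_file, open("repaint_mask.png", "rb") as mask_file:
    output = replicate.run(
        "vestigiaproject/firstempirepaintings:92fb0a0a2de2b7800629ed5c594f7d6d607db001350d45c237ef4e570378a77f",
        input={
            "image": image_file,
            "mask": mask_file,
            "prompt": "A portrait of a lady, by FranΓ§ois GΓ©rard, oil on canvas, FIRSTEMPIREPAINTINGS",
            "prompt_strength": 0.8,  # 1.0 corresponds to full destruction of the source image information
            "num_inference_steps": 28,
        },
    )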
Output Schema

Output

Type: array β€’ Items Type: string β€’ Items Format: uri
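
Per this schema, a prediction returns an array of URIs, one per generated image. A minimal sketch for saving them, assuming the output list from the earlier example contains plain URI strings (newer versions of the Replicate client may return file-like objects instead):

import urllib.request

# Download each returned image; `output` is the list from replicate.run() above.
for index, uri in enumerate(output):
    urllib.request.urlretrieve(str(uri), f"output_{index}.webp")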

Example Execution Logs
2025-01-11 14:14:47.635 | DEBUG    | fp8.lora_loading:apply_lora_to_model:574 - Extracting keys
2025-01-11 14:14:47.636 | DEBUG    | fp8.lora_loading:apply_lora_to_model:581 - Keys extracted
Applying LoRA:   0%|          | 0/304 [00:00<?, ?it/s]
Applying LoRA:  93%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–Ž| 283/304 [00:00<00:00, 2812.77it/s]
Applying LoRA: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 304/304 [00:00<00:00, 2691.26it/s]
2025-01-11 14:14:47.749 | SUCCESS  | fp8.lora_loading:unload_loras:564 - LoRAs unloaded in 0.11s
free=29105989115904
Downloading weights
2025-01-11T14:14:47Z | INFO  | [ Initiating ] chunk_size=150M dest=/tmp/tmpqxac82aw/weights url=https://replicate.delivery/xezq/XVB9XywtSl6gGJUMaVQbJJGVrQPBIl2BpRgCMJ8edNNIpF9JA/trained_model.tar
2025-01-11T14:14:49Z | INFO  | [ Complete ] dest=/tmp/tmpqxac82aw/weights size="172 MB" total_elapsed=2.197s url=https://replicate.delivery/xezq/XVB9XywtSl6gGJUMaVQbJJGVrQPBIl2BpRgCMJ8edNNIpF9JA/trained_model.tar
Downloaded weights in 2.22s
2025-01-11 14:14:49.972 | INFO     | fp8.lora_loading:convert_lora_weights:498 - Loading LoRA weights for /src/weights-cache/6e041a0ac8e9cb4a
2025-01-11 14:14:50.042 | INFO     | fp8.lora_loading:convert_lora_weights:519 - LoRA weights loaded
2025-01-11 14:14:50.042 | DEBUG    | fp8.lora_loading:apply_lora_to_model:574 - Extracting keys
2025-01-11 14:14:50.043 | DEBUG    | fp8.lora_loading:apply_lora_to_model:581 - Keys extracted
Applying LoRA:   0%|          | 0/304 [00:00<?, ?it/s]
Applying LoRA:  93%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–Ž| 283/304 [00:00<00:00, 2820.22it/s]
Applying LoRA: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 304/304 [00:00<00:00, 2698.11it/s]
2025-01-11 14:14:50.155 | SUCCESS  | fp8.lora_loading:load_lora:539 - LoRA applied in 0.18s
Using seed: 9581
0it [00:00, ?it/s]
1it [00:00,  8.34it/s]
2it [00:00,  5.84it/s]
3it [00:00,  5.33it/s]
4it [00:00,  5.12it/s]
5it [00:00,  5.01it/s]
6it [00:01,  4.93it/s]
7it [00:01,  4.88it/s]
8it [00:01,  4.86it/s]
9it [00:01,  4.85it/s]
10it [00:01,  4.84it/s]
11it [00:02,  4.82it/s]
12it [00:02,  4.81it/s]
13it [00:02,  4.81it/s]
14it [00:02,  4.82it/s]
15it [00:03,  4.81it/s]
16it [00:03,  4.81it/s]
17it [00:03,  4.81it/s]
18it [00:03,  4.80it/s]
19it [00:03,  4.80it/s]
20it [00:04,  4.81it/s]
21it [00:04,  4.80it/s]
22it [00:04,  4.80it/s]
23it [00:04,  4.80it/s]
24it [00:04,  4.79it/s]
25it [00:05,  4.80it/s]
26it [00:05,  4.80it/s]
27it [00:05,  4.80it/s]
28it [00:05,  4.80it/s]
28it [00:05,  4.88it/s]
Total safe images: 1 out of 1
Version Details
Version ID
92fb0a0a2de2b7800629ed5c594f7d6d607db001350d45c237ef4e570378a77f
Version Created
December 12, 2024