drdanieltsang/3dcartoonstyle

444 runs · Jan 2025 · Cog 0.11.1
3d-cartoon image-inpainting image-to-image lora text-to-image

About

Example Output

Prompt:

"3dcartoon style 3D cartoon danielctsang wearing a black t-shirt, big eyes, blue jeans, hands on his face, uniformly short shaved hair, surprised look, cinematic lighting, Pixar-like shading, standing in a living room looking at a new big screen tv"

Output

(example output image)

Performance Metrics

23.17s Prediction Time
23.18s Total Time
All Input Parameters
{
  "model": "dev",
  "prompt": "3dcartoon style 3D cartoon danielctsang wearing a black t-shirt, big eyes, blue jeans, hands on his face, uniformly short shaved hair, surprised look, cinematic lighting, Pixar-like shading, standing in a living room looking at a new big screen tv",
  "go_fast": false,
  "extra_lora": "huggingface.co/dantsang/danielctsang5",
  "lora_scale": 1.01,
  "megapixels": "1",
  "num_outputs": 1,
  "aspect_ratio": "1:1",
  "output_format": "webp",
  "guidance_scale": 3,
  "output_quality": 80,
  "prompt_strength": 0.8,
  "extra_lora_scale": 1.1,
  "num_inference_steps": 28
}
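As a rough sketch (assuming the Replicate Python client and an API token in REPLICATE_API_TOKEN), a prediction with these inputs could be reproduced roughly as follows; the version hash is the one listed under Version Details below.

import replicate

# Sketch only: assumes the replicate Python client is installed and
# REPLICATE_API_TOKEN is set in the environment.
output = replicate.run(
    "drdanieltsang/3dcartoonstyle:21172c96db5a714385ef66c466d1a51997b63ddd5ce8e7c26116bb96d8ad443b",
    input={
        "model": "dev",
        "prompt": "3dcartoon style 3D cartoon danielctsang ...",  # full prompt as shown above
        "go_fast": False,
        "extra_lora": "huggingface.co/dantsang/danielctsang5",
        "lora_scale": 1.01,
        "extra_lora_scale": 1.1,
        "aspect_ratio": "1:1",
        "guidance_scale": 3,
        "num_inference_steps": 28,
        "output_format": "webp",
        "output_quality": 80,
    },
)
print(output)  # list of generated image URLs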
Input Parameters
mask Type: string
Image mask for image inpainting mode. If provided, aspect_ratio, width, and height inputs are ignored.
seed Type: integer
Random seed. Set for reproducible generation
image Type: string
Input image for image to image or inpainting mode. If provided, aspect_ratio, width, and height inputs are ignored.
model Default: dev
Which model to run inference with. The dev model performs best with around 28 inference steps, but the schnell model only needs 4 steps.
width Type: integer, Range: 256 - 1440
Width of generated image. Only works if `aspect_ratio` is set to custom. Will be rounded to nearest multiple of 16. Incompatible with fast generation
height Type: integer, Range: 256 - 1440
Height of generated image. Only works if `aspect_ratio` is set to custom. Will be rounded to nearest multiple of 16. Incompatible with fast generation
prompt (required) Type: string
Prompt for generated image. If you include the `trigger_word` used in the training process you are more likely to activate the trained object, style, or concept in the resulting image.
go_fast Type: boolean, Default: false
Run faster predictions with model optimized for speed (currently fp8 quantized); disable to run in original bf16
extra_lora Type: string
Load LoRA weights. Supports Replicate models in the format <owner>/<username> or <owner>/<username>/<version>, HuggingFace URLs in the format huggingface.co/<owner>/<model-name>, CivitAI URLs in the format civitai.com/models/<id>[/<model-name>], or arbitrary .safetensors URLs from the Internet. For example, 'fofr/flux-pixar-cars'
lora_scale Type: number, Default: 1, Range: -1 - 3
Determines how strongly the main LoRA should be applied. Sane results between 0 and 1 for base inference. For go_fast we apply a 1.5x multiplier to this value; we've generally seen good performance when scaling the base value by that amount. You may still need to experiment to find the best value for your particular lora.
megapixels Default: 1
Approximate number of megapixels for generated image
num_outputs Type: integer, Default: 1, Range: 1 - 4
Number of outputs to generate
aspect_ratio Default: 1:1
Aspect ratio for the generated image. If custom is selected, uses height and width below & will run in bf16 mode
output_format Default: webp
Format of the output images
guidance_scale Type: number, Default: 3, Range: 0 - 10
Guidance scale for the diffusion process. Lower values can give more realistic images. Good values to try are 2, 2.5, 3 and 3.5
output_quality Type: integer, Default: 80, Range: 0 - 100
Quality when saving the output images, from 0 to 100. 100 is best quality, 0 is lowest quality. Not relevant for .png outputs
prompt_strength Type: number, Default: 0.8, Range: 0 - 1
Prompt strength when using img2img. 1.0 corresponds to full destruction of information in image
extra_lora_scale Type: number, Default: 1, Range: -1 - 3
Determines how strongly the extra LoRA should be applied. Sane results between 0 and 1 for base inference. For go_fast we apply a 1.5x multiplier to this value; we've generally seen good performance when scaling the base value by that amount. You may still need to experiment to find the best value for your particular lora.
replicate_weights Type: string
Load LoRA weights. Supports Replicate models in the format <owner>/<username> or <owner>/<username>/<version>, HuggingFace URLs in the format huggingface.co/<owner>/<model-name>, CivitAI URLs in the format civitai.com/models/<id>[/<model-name>], or arbitrary .safetensors URLs from the Internet. For example, 'fofr/flux-pixar-cars'
num_inference_steps Type: integer, Default: 28, Range: 1 - 50
Number of denoising steps. More steps can give more detailed images, but take longer.
disable_safety_checker Type: boolean, Default: false
Disable safety checker for generated images.
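To show how the image, mask, and prompt_strength inputs fit together, here is a hedged image-to-image / inpainting sketch; my_photo.png and my_mask.png are placeholder file names, not files referenced anywhere above.

import replicate

# Sketch: supplying `image` switches the model to image-to-image mode, and
# adding `mask` switches it to inpainting. In both cases the aspect_ratio,
# width, and height inputs are ignored, as noted above.
output = replicate.run(
    "drdanieltsang/3dcartoonstyle:21172c96db5a714385ef66c466d1a51997b63ddd5ce8e7c26116bb96d8ad443b",
    input={
        "prompt": "3dcartoon style 3D cartoon danielctsang, big eyes, Pixar-like shading",
        "image": open("my_photo.png", "rb"),  # placeholder input image
        "mask": open("my_mask.png", "rb"),    # placeholder mask for inpainting mode
        "prompt_strength": 0.8,               # 1.0 fully discards the input image content
        "num_inference_steps": 28,
    },
)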
Output Schema

Output

Type: array, Items Type: string, Items Format: uri
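
Assuming the client returns the plain URI strings this schema describes, a minimal way to save each generated image locally might be:

import urllib.request

# `output` is a list of image URLs, one per generated image
for i, url in enumerate(output):
    urllib.request.urlretrieve(url, f"output_{i}.webp")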

Example Execution Logs
2025-01-21 21:55:43.117 | DEBUG    | fp8.lora_loading:apply_lora_to_model:574 - Extracting keys
2025-01-21 21:55:43.118 | DEBUG    | fp8.lora_loading:apply_lora_to_model:581 - Keys extracted
Applying LoRA:   0%|          | 0/304 [00:00<?, ?it/s]
Applying LoRA:  92%|█████████▏| 280/304 [00:00<00:00, 2799.50it/s]
Applying LoRA: 100%|██████████| 304/304 [00:00<00:00, 2647.41it/s]
2025-01-21 21:55:43.233 | SUCCESS  | fp8.lora_loading:unload_loras:564 - LoRAs unloaded in 0.12s
free=29001128423424
Downloading weights
2025-01-21T21:55:43Z | INFO  | [ Initiating ] chunk_size=150M dest=/tmp/tmpekzfues4/weights url=https://replicate.delivery/xezq/0LsjUNIDZvowONFm2AmgkyFpI0VPONbAdIU3tM7MGEjOu2BF/trained_model.tar
2025-01-21T21:55:45Z | INFO  | [ Complete ] dest=/tmp/tmpekzfues4/weights size="172 MB" total_elapsed=1.755s url=https://replicate.delivery/xezq/0LsjUNIDZvowONFm2AmgkyFpI0VPONbAdIU3tM7MGEjOu2BF/trained_model.tar
Downloaded weights in 1.79s
free=29000965058560
Downloading weights
2025-01-21T21:55:45Z | INFO  | [ Initiating ] chunk_size=150M dest=/src/weights-cache/6897b2378eb833dc url=https://huggingface.co/dantsang/danielctsang5/resolve/main/lora.safetensors
2025-01-21T21:55:45Z | INFO  | [ Redirect ] redirect_url=https://cdn-lfs-us-1.hf.co/repos/17/1f/171f57c5f3495dac44e38ff1dac39d788389bf84bc602fedc181f7d9638098bf/8aeb2821010aa7aff66409aaf17083fd380f1128776001b0860ec210d2b98a14?response-content-disposition=inline%3B+filename*%3DUTF-8%27%27lora.safetensors%3B+filename%3D%22lora.safetensors%22%3B&Expires=1737500145&Policy=eyJTdGF0ZW1lbnQiOlt7IkNvbmRpdGlvbiI6eyJEYXRlTGVzc1RoYW4iOnsiQVdTOkVwb2NoVGltZSI6MTczNzUwMDE0NX19LCJSZXNvdXJjZSI6Imh0dHBzOi8vY2RuLWxmcy11cy0xLmhmLmNvL3JlcG9zLzE3LzFmLzE3MWY1N2M1ZjM0OTVkYWM0NGUzOGZmMWRhYzM5ZDc4ODM4OWJmODRiYzYwMmZlZGMxODFmN2Q5NjM4MDk4YmYvOGFlYjI4MjEwMTBhYTdhZmY2NjQwOWFhZjE3MDgzZmQzODBmMTEyODc3NjAwMWIwODYwZWMyMTBkMmI5OGExND9yZXNwb25zZS1jb250ZW50LWRpc3Bvc2l0aW9uPSoifV19&Signature=vL8Mske4e9iw-EbBBWHJ6Z43-5h4F%7EwlvtFn3y8S5%7EnYWjFj0nqA8gK25ztkCKWcr%7E0gG6eqt7an-5Q4vo-U7mEia1FMqx2qeKwyogiJZlLC-dw3aLgMJ1bd6riFSa9G-d1jFUNHBxvjsiILikqSrWomXjgDkOhz8ErSt%7EkHwCJ-pXy-WSNpOu4wep%7EukkEszhrJ8h0kll-yYuIdHSQsd%7EozY3FjnQH5aNIJXFsAn%7EoUztcaaBS08m%7EbC1xU7xamk46dnhVdXdp200vNW6Ch7PaqcJDyT9%7EDlxF2jPMXZNpbErGmDueNgXZA7SsR4T4X73ysU4WPYGn0JpYbNbi%7E4g__&Key-Pair-Id=K24J24Z295AEI9 url=https://huggingface.co/dantsang/danielctsang5/resolve/main/lora.safetensors
2025-01-21T21:55:59Z | INFO  | [ Complete ] dest=/src/weights-cache/6897b2378eb833dc size="172 MB" total_elapsed=14.554s url=https://huggingface.co/dantsang/danielctsang5/resolve/main/lora.safetensors
Downloaded weights in 14.58s
2025-01-21 21:55:59.713 | INFO     | fp8.lora_loading:convert_lora_weights:498 - Loading LoRA weights for /src/weights-cache/b2d2fcc50e7db039
2025-01-21 21:55:59.784 | INFO     | fp8.lora_loading:convert_lora_weights:519 - LoRA weights loaded
2025-01-21 21:55:59.785 | DEBUG    | fp8.lora_loading:apply_lora_to_model:574 - Extracting keys
2025-01-21 21:55:59.785 | DEBUG    | fp8.lora_loading:apply_lora_to_model:581 - Keys extracted
Applying LoRA:   0%|          | 0/304 [00:00<?, ?it/s]
Applying LoRA:  93%|█████████▎| 284/304 [00:00<00:00, 2835.35it/s]
Applying LoRA: 100%|██████████| 304/304 [00:00<00:00, 2676.07it/s]
2025-01-21 21:55:59.899 | SUCCESS  | fp8.lora_loading:load_lora:539 - LoRA applied in 0.19s
2025-01-21 21:55:59.899 | INFO     | fp8.lora_loading:convert_lora_weights:498 - Loading LoRA weights for /src/weights-cache/6897b2378eb833dc
2025-01-21 21:56:00.015 | INFO     | fp8.lora_loading:convert_lora_weights:519 - LoRA weights loaded
2025-01-21 21:56:00.015 | DEBUG    | fp8.lora_loading:apply_lora_to_model:574 - Extracting keys
2025-01-21 21:56:00.016 | DEBUG    | fp8.lora_loading:apply_lora_to_model:581 - Keys extracted
Applying LoRA:   0%|          | 0/304 [00:00<?, ?it/s]
Applying LoRA:  93%|█████████▎| 284/304 [00:00<00:00, 2833.83it/s]
Applying LoRA: 100%|██████████| 304/304 [00:00<00:00, 2674.85it/s]
2025-01-21 21:56:00.130 | SUCCESS  | fp8.lora_loading:load_lora:539 - LoRA applied in 0.23s
Using seed: 33677
0it [00:00, ?it/s]
1it [00:00,  8.34it/s]
2it [00:00,  5.82it/s]
3it [00:00,  5.30it/s]
4it [00:00,  5.10it/s]
5it [00:00,  4.95it/s]
6it [00:01,  4.86it/s]
7it [00:01,  4.82it/s]
8it [00:01,  4.81it/s]
9it [00:01,  4.80it/s]
10it [00:02,  4.77it/s]
11it [00:02,  4.77it/s]
12it [00:02,  4.77it/s]
13it [00:02,  4.78it/s]
14it [00:02,  4.77it/s]
15it [00:03,  4.75it/s]
16it [00:03,  4.74it/s]
17it [00:03,  4.74it/s]
18it [00:03,  4.74it/s]
19it [00:03,  4.75it/s]
20it [00:04,  4.74it/s]
21it [00:04,  4.74it/s]
22it [00:04,  4.75it/s]
23it [00:04,  4.75it/s]
24it [00:04,  4.75it/s]
25it [00:05,  4.75it/s]
26it [00:05,  4.75it/s]
27it [00:05,  4.76it/s]
28it [00:05,  4.75it/s]
28it [00:05,  4.83it/s]
Total safe images: 1 out of 1
Version Details
Version ID
21172c96db5a714385ef66c466d1a51997b63ddd5ce8e7c26116bb96d8ad443b
Version Created
January 21, 2025