aaronhayes/sam2-infill-anything 🔢🖼️📝❓ → 🖼️
About
Inpaint anything with automatic mask generation
Example Output
Performance Metrics
Prediction Time: 10.61s
Total Time: 42.09s
All Input Parameters
{
"cfg": 8,
"image": "https://replicate.delivery/pbxt/MSDtQ6SQcoBe7skJ2iINnCISioSfSAoe7OyrVaUzkuI47a5Q/image.png",
"steps": 20,
"denoise": 0.9,
"mask_prompt": "rabbit",
"infill_prompt": "A small cute baby grizzly bear",
"output_format": "jpg",
"mask_threshold": 0.5,
"output_quality": 95,
"infill_negative_prompt": "deformed, distorted, blurry, bad light, extra buildings, extra structures, buildings, overexposed, oversaturated, fake, animated, cartoon"
}
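As a hedged sketch (not part of the original page), the inputs above could be passed to this model through the Replicate Python client. The version hash pins the version listed under Version Details below, and the client expects a REPLICATE_API_TOKEN environment variable to be set.

# Sketch only: requires `pip install replicate` and REPLICATE_API_TOKEN in the environment.
import replicate

output = replicate.run(
    "aaronhayes/sam2-infill-anything:622920c3362cfc010e32ff85f98b2f7c4bc36b498e540386b9a52722c3e32f6d",
    input={
        "cfg": 8,
        "image": "https://replicate.delivery/pbxt/MSDtQ6SQcoBe7skJ2iINnCISioSfSAoe7OyrVaUzkuI47a5Q/image.png",
        "steps": 20,
        "denoise": 0.9,
        "mask_prompt": "rabbit",
        "infill_prompt": "A small cute baby grizzly bear",
        "output_format": "jpg",
        "mask_threshold": 0.5,
        "output_quality": 95,
        "infill_negative_prompt": "deformed, distorted, blurry, bad light, extra buildings, extra structures, buildings, overexposed, oversaturated, fake, animated, cartoon",
    },
)
# Depending on the client version, `output` is a URL string, a list of URLs,
# or a file-like object for the generated image.
print(output)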
Input Parameters
- cfg
- Classifier-free guidance scale, balancing creativity against prompt adherence. Higher values result in images that match the prompt more closely.
- seed
- Set a seed for reproducibility. Random by default.
- image (required)
- Input image to inpaint (a minimal call using only the required parameters is sketched after this list)
- steps
- Inference steps
- denoise
- Amount of denoising applied; lower values preserve more of the input image's structure
- mask_prompt (required)
- Prompt for SAM2 mask generation
- infill_prompt (required)
- Prompt to infill image
- output_format
- Format of the output images
- mask_threshold
- Threshold for mask generation; higher values lead to more restrictive masks
- output_quality
- Quality of the output images, from 0 (lowest) to 100 (best)
- infill_negative_prompt
- Negative prompt for infill image
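Only image, mask_prompt, and infill_prompt are required. As a hedged sketch, a minimal call could supply just those three and let every other parameter fall back to the model's defaults (default values are not listed on this page); the input URL below is a hypothetical placeholder.

# Minimal-call sketch: only the three required inputs are set; all other
# parameters (cfg, steps, denoise, seed, ...) use the model's defaults.
import replicate

output = replicate.run(
    "aaronhayes/sam2-infill-anything",
    input={
        "image": "https://example.com/photo.png",   # hypothetical input image URL
        "mask_prompt": "rabbit",                     # object for SAM2 to segment
        "infill_prompt": "A small cute baby grizzly bear",
    },
)
print(output)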
Output Schema
Example Execution Logs
Random seed set to: 849027535
Checking inputs
✅ /tmp/inputs/image.png
====================================
Checking weights
Checking if juggernautXLInpainting_xiInpainting.safetensors exists in ComfyUI/models/checkpoints
✅ juggernautXLInpainting_xiInpainting.safetensors exists in ComfyUI/models/checkpoints
Skipping sam2_1_hiera_base_plus.pt as weights are bundled in cog
Checking if 4x-UltraSharp.pth exists in ComfyUI/models/upscale_models
✅ 4x-UltraSharp.pth exists in ComfyUI/models/upscale_models
====================================
Running workflow
[ComfyUI] got prompt
Executing node 21, title: Load Upscale Model, class type: UpscaleModelLoader
Executing node 13, title: Load Checkpoint, class type: CheckpointLoaderSimple
[ComfyUI] model weight dtype torch.float16, manual cast: None
[ComfyUI] model_type EPS
[ComfyUI] Using pytorch attention in VAE
[ComfyUI] Using pytorch attention in VAE
[ComfyUI] VAE load device: cuda:0, offload device: cpu, dtype: torch.bfloat16
[ComfyUI] CLIP model load device: cuda:0, offload device: cpu, current: cpu, dtype: torch.float16
Executing node 1, title: Load Image, class type: LoadImage
Executing node 2, title: Image Scale Down To Size, class type: easy imageScaleDownToSize
Executing node 3, title: 🔧 Get Image Size, class type: GetImageSize+
Executing node 4, title: Resize Image, class type: ImageResizeKJ
Executing node 10, title: GroundingDinoModelLoader (segment anything2), class type: GroundingDinoModelLoader (segment anything2)
Executing node 9, title: SAM2ModelLoader (segment anything2), class type: SAM2ModelLoader (segment anything2)
[ComfyUI] /root/.pyenv/versions/3.10.15/lib/python3.10/site-packages/torch/functional.py:534: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at ../aten/src/ATen/native/TensorShape.cpp:3595.)
[ComfyUI] return _VF.meshgrid(tensors, **kwargs) # type: ignore[attr-defined]
[ComfyUI] Loaded checkpoint sucessfully
Executing node 11, title: GroundingDinoSAM2Segment (segment anything2), class type: GroundingDinoSAM2Segment (segment anything2)
[ComfyUI] /root/.pyenv/versions/3.10.15/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py:632: UserWarning: torch.utils.checkpoint: the use_reentrant parameter should be passed explicitly. In version 2.5 we will raise an exception if use_reentrant is not passed. use_reentrant=False is recommended, but if you need to preserve the current default behavior, you can pass use_reentrant=True. Refer to docs for more details on the differences between the two variants.
[ComfyUI] return fn(*args, **kwargs)
[ComfyUI] /root/.pyenv/versions/3.10.15/lib/python3.10/site-packages/torch/utils/checkpoint.py:87: UserWarning: None of the inputs have requires_grad=True. Gradients will be None
[ComfyUI] warnings.warn(
[ComfyUI] For numpy array image, we assume (HxWxC) format
[ComfyUI] Computing image embeddings for the provided image...
[ComfyUI] Image embeddings computed.
Executing node 12, title: GrowMask, class type: GrowMask
Executing node 16, title: Negative Prompt, class type: CLIPTextEncode
[ComfyUI] Requested to load SDXLClipModel
[ComfyUI] loaded completely 9.5367431640625e+25 1560.802734375 True
Executing node 15, title: Prompt, class type: CLIPTextEncode
Executing node 17, title: InpaintModelConditioning, class type: InpaintModelConditioning
[ComfyUI] Requested to load AutoencoderKL
[ComfyUI] loaded completely 9.5367431640625e+25 159.55708122253418 True
Executing node 14, title: Differential Diffusion, class type: DifferentialDiffusion
Executing node 18, title: KSampler, class type: KSampler
[ComfyUI] Requested to load SDXL
[ComfyUI] loaded completely 9.5367431640625e+25 4897.075813293457 True
[ComfyUI]
[ComfyUI] [ComfyUI-Easy-Use] server: v1.2.7 Loaded
[ComfyUI] [ComfyUI-Easy-Use] web root: /src/ComfyUI/custom_nodes/ComfyUI-Easy-Use/web_version/v2 Loaded
[ComfyUI] grounding-dino is using models/bert-base-uncased
[ComfyUI] final text_encoder_type: /src/ComfyUI/models/bert-base-uncased
[ComfyUI] scores: [[0.9872246]]
[ComfyUI] 0%| | 0/20 [00:00<?, ?it/s]
[ComfyUI] 5%|▌ | 1/20 [00:00<00:02, 6.37it/s]
[ComfyUI] 15%|█▌ | 3/20 [00:00<00:01, 12.30it/s]
[ComfyUI] 25%|██▌ | 5/20 [00:00<00:01, 14.28it/s]
[ComfyUI] 35%|███▌ | 7/20 [00:00<00:00, 15.32it/s]
[ComfyUI] 45%|████▌ | 9/20 [00:00<00:00, 15.90it/s]
[ComfyUI] 55%|█████▌ | 11/20 [00:00<00:00, 16.21it/s]
[ComfyUI] 65%|██████▌ | 13/20 [00:00<00:00, 16.44it/s]
[ComfyUI] 75%|███████▌ | 15/20 [00:00<00:00, 16.64it/s]
[ComfyUI] 85%|████████▌ | 17/20 [00:01<00:00, 16.73it/s]
[ComfyUI] 95%|█████████▌| 19/20 [00:01<00:00, 16.69it/s]
Executing node 19, title: VAE Decode, class type: VAEDecode
Executing node 20, title: Upscale Image (using Model), class type: ImageUpscaleWithModel
Executing node 23, title: Save Image, class type: SaveImage
[ComfyUI] 100%|██████████| 20/20 [00:01<00:00, 15.72it/s]
[ComfyUI] Prompt executed in 9.83 seconds
outputs: {'23': {'images': [{'filename': 'output_00001_.png', 'subfolder': '', 'type': 'output'}]}}
====================================
output_00001_.png
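For reference, the node execution order from the log above can be condensed into a plain list. The sketch below only reprints that order with the node titles and class types copied from the log; it does not invoke ComfyUI.

# Node execution order reconstructed from the example log (descriptive only).
WORKFLOW_STEPS = [
    "Load Upscale Model (UpscaleModelLoader)",
    "Load Checkpoint (CheckpointLoaderSimple)",
    "Load Image (LoadImage)",
    "Image Scale Down To Size (easy imageScaleDownToSize)",
    "Get Image Size (GetImageSize+)",
    "Resize Image (ImageResizeKJ)",
    "GroundingDinoModelLoader (segment anything2)",
    "SAM2ModelLoader (segment anything2)",
    "GroundingDinoSAM2Segment (segment anything2)",
    "GrowMask",
    "Negative Prompt (CLIPTextEncode)",
    "Prompt (CLIPTextEncode)",
    "InpaintModelConditioning",
    "Differential Diffusion (DifferentialDiffusion)",
    "KSampler",
    "VAE Decode (VAEDecode)",
    "Upscale Image using Model (ImageUpscaleWithModel)",
    "Save Image (SaveImage)",
]

if __name__ == "__main__":
    for i, step in enumerate(WORKFLOW_STEPS, start=1):
        print(f"{i:2d}. {step}")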
Version Details
- Version ID
622920c3362cfc010e32ff85f98b2f7c4bc36b498e540386b9a52722c3e32f6d
- Version Created
- January 21, 2025