fofr/style-transfer
About
Transfer the style of one image to another

Example Output
Prompt:
"An astronaut riding a unicorn"
Output

Performance Metrics
- Prediction Time: 5.90s
- Total Time: 5.91s
All Input Parameters
{ "model": "fast", "width": 1024, "height": 1024, "prompt": "An astronaut riding a unicorn", "style_image": "https://replicate.delivery/pbxt/KlTqluRakBzt7N5mm1WExEQCc4J3usa7E3n5dhttcayTqFRm/van-gogh.jpeg", "output_format": "webp", "output_quality": 80, "negative_prompt": "", "number_of_images": 1, "structure_depth_strength": 1, "structure_denoising_strength": 0.65 }
Input Parameters
- seed: Set a seed for reproducibility. Random by default.
- model: Model to use for the generation.
- width: Width of the output image (ignored if a structure image is given).
- height: Height of the output image (ignored if a structure image is given).
- prompt: Prompt for the image.
- style_image (required): Copy the style from this image.
- output_format: Format of the output images.
- output_quality: Quality of the output images, from 0 to 100. 100 is best quality, 0 is lowest quality.
- negative_prompt: Things you do not want to see in your image.
- structure_image: An optional image to copy structure from. Output images will use the same aspect ratio.
- number_of_images: Number of images to generate.
- structure_depth_strength: Strength of the depth ControlNet.
- structure_denoising_strength: How much of the original image (and its colors) to preserve. 0 preserves everything, 1 preserves nothing; 0.65 is a good balance. See the sketch after this list for how the structure parameters are used together.
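The two structure parameters only matter when a structure_image is supplied: structure_depth_strength sets how strongly the depth ControlNet enforces the reference layout, while structure_denoising_strength sets how much of the original pixels survive. A minimal sketch of that combination; the prompt and both image URLs are placeholders, not real inputs:

import replicate

# Hypothetical structure + style run: keep the layout of structure_image,
# repaint it in the look of style_image. Both URLs are placeholders.
output = replicate.run(
    "fofr/style-transfer:f1023890703bc0a5a3a2c21b5e498833be5f6ef6e70e9daf6b9b3a4fd8309cf0",
    input={
        "prompt": "A portrait photograph",
        "style_image": "https://example.com/van-gogh.jpeg",      # placeholder URL
        "structure_image": "https://example.com/portrait.jpeg",  # placeholder URL
        "structure_depth_strength": 1,         # strength of the depth ControlNet
        "structure_denoising_strength": 0.65,  # 0 keeps the original image, 1 discards it
    },
)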
Output Schema
Output
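Assuming the model returns one image URL per generated image (the usual shape for Replicate image models; an assumption, since only the Output type is named above), the results can be saved locally with a small helper. save_outputs below is a hypothetical convenience function, not part of the model:

import urllib.request

def save_outputs(output, prefix="output"):
    # Assumption: 'output' is an iterable of image URLs (or objects whose
    # str() is a URL), as returned by replicate.run in the sketches above.
    for i, item in enumerate(output):
        filename = f"{prefix}_{i}.webp"  # output_format was "webp" in the example inputs
        urllib.request.urlretrieve(str(item), filename)
        print(f"Saved {filename}")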
Example Execution Logs
Random seed set to: 1640868803
Checking weights
Including weights for IPAdapter preset: PLUS (high strength)
✅ ip-adapter-plus_sdxl_vit-h.safetensors
✅ CLIP-ViT-H-14-laion2B-s32B-b79K.safetensors
✅ dreamshaperXL_lightningDPMSDE.safetensors
====================================
Running workflow
got prompt
Executing node 2, title: Load Checkpoint, class type: CheckpointLoaderSimple
model_type EPS
Using pytorch attention in VAE
Using pytorch attention in VAE
clip missing: ['clip_l.logit_scale', 'clip_l.transformer.text_projection.weight']
loaded straight to GPU
Requested to load SDXL
Loading 1 new model
Executing node 1, title: IPAdapter Unified Loader, class type: IPAdapterUnifiedLoader
INFO: Clip Vision model loaded from /src/ComfyUI/models/clip_vision/CLIP-ViT-H-14-laion2B-s32B-b79K.safetensors
INFO: IPAdapter model loaded from /src/ComfyUI/models/ipadapter/ip-adapter-plus_sdxl_vit-h.safetensors
Executing node 5, title: Load Image, class type: LoadImage
INFO: the IPAdapter reference image is not a square, CLIPImageProcessor will resize and crop it at the center. If the main focus of the picture is not in the middle the result might not be what you are expecting.
Executing node 4, title: IPAdapter, class type: IPAdapter
Requested to load CLIPVisionModelProjection
Loading 1 new model
Executing node 6, title: CLIP Text Encode (Prompt), class type: CLIPTextEncode
Requested to load SDXLClipModel
Loading 1 new model
Executing node 7, title: CLIP Text Encode (Prompt), class type: CLIPTextEncode
Executing node 10, title: Empty Latent Image, class type: EmptyLatentImage
Executing node 3, title: KSampler, class type: KSampler
Requested to load SDXL
Loading 1 new model
  0%|          | 0/4 [00:00<?, ?it/s]
/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/torchsde/_brownian/brownian_interval.py:608: UserWarning: Should have tb<=t1 but got tb=14.614644050598145 and t1=14.614643.
  warnings.warn(f"Should have {tb_name}<=t1 but got {tb_name}={tb} and t1={self._end}.")
 25%|██▌       | 1/4 [00:00<00:01, 2.99it/s]
 50%|█████     | 2/4 [00:00<00:00, 3.64it/s]
 75%|███████▌  | 3/4 [00:00<00:00, 3.94it/s]
100%|██████████| 4/4 [00:00<00:00, 5.05it/s]
100%|██████████| 4/4 [00:00<00:00, 4.40it/s]
Requested to load AutoencoderKL
Loading 1 new model
Executing node 8, title: VAE Decode, class type: VAEDecode
Executing node 9, title: Save Image, class type: SaveImage
Prompt executed in 5.15 seconds
outputs: {'9': {'images': [{'filename': 'ComfyUI_00001_.png', 'subfolder': '', 'type': 'output'}]}}
====================================
ComfyUI_00001_.png
Version Details
- Version ID: f1023890703bc0a5a3a2c21b5e498833be5f6ef6e70e9daf6b9b3a4fd8309cf0
- Version Created: April 19, 2024