kitaef/mytestmodel 🖼️ → 🖼️
About
An img2img pipeline that turns a photo of a person into an anime-style image. It uses an SD 1.5 model as the base, a depth-estimation ControlNet for structure, and an IP-Adapter model for face consistency.
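The description above maps onto standard diffusers building blocks. Below is a minimal sketch, assuming common public checkpoints (runwayml/stable-diffusion-v1-5, lllyasviel's SD 1.5 depth ControlNet, and h94/IP-Adapter); the checkpoints this model actually uses are not published on this page:

```python
def build_pipeline():
    """Assemble SD1.5 + depth ControlNet + IP-Adapter as described above.

    Checkpoint names are assumptions; this page does not list the real ones.
    """
    import torch
    from diffusers import ControlNetModel, StableDiffusionControlNetImg2ImgPipeline

    controlnet = ControlNetModel.from_pretrained(
        "lllyasviel/control_v11f1p_sd15_depth", torch_dtype=torch.float16
    )
    pipe = StableDiffusionControlNetImg2ImgPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",
        controlnet=controlnet,
        torch_dtype=torch.float16,
    ).to("cuda")
    # IP-Adapter keeps the generated face close to the input face.
    pipe.load_ip_adapter(
        "h94/IP-Adapter", subfolder="models", weight_name="ip-adapter_sd15.bin"
    )
    return pipe


def stylize(pipe, image, prompt="anime style portrait"):
    """Run one img2img pass; a depth map of the input conditions the ControlNet."""
    from transformers import pipeline as hf_pipeline

    depth = hf_pipeline("depth-estimation")(image)["depth"]
    return pipe(
        prompt=prompt,
        image=image,              # img2img init image
        control_image=depth,      # depth ControlNet conditioning
        ip_adapter_image=image,   # face reference for the IP-Adapter
        strength=0.7,             # illustrative value, not from this page
        num_inference_steps=20,   # matches the 20 steps visible in the logs
    ).images[0]
```

The heavy imports sit inside the functions so the module can be inspected without a GPU; the actual prompt, strength, and scheduler used by this model are unknown.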

Example Output
(output image preview not included in this export)

Performance Metrics
- Prediction Time: 18.18s
- Total Time: 136.75s
Input Parameters
- input_image (required): an image containing a person
Output Schema
Output
Example Execution Logs
0%|          | 0/20 [00:00<?, ?it/s]
/root/.pyenv/versions/3.11.9/lib/python3.11/site-packages/torch/nn/modules/conv.py:456: UserWarning: Plan failed with a cudnnException: CUDNN_BACKEND_EXECUTION_PLAN_DESCRIPTOR: cudnnFinalize Descriptor Failed cudnn_status: CUDNN_STATUS_NOT_SUPPORTED (Triggered internally at ../aten/src/ATen/native/cudnn/Conv_v8.cpp:919.)
  return F.conv2d(input, weight, bias, self.stride,
5%|▌         | 1/20 [00:02<00:41, 2.19s/it]
10%|█         | 2/20 [00:02<00:20, 1.16s/it]
15%|█▌        | 3/20 [00:03<00:14, 1.20it/s]
20%|██        | 4/20 [00:03<00:10, 1.48it/s]
25%|██▌       | 5/20 [00:03<00:08, 1.70it/s]
30%|███       | 6/20 [00:04<00:07, 1.86it/s]
35%|███▌      | 7/20 [00:04<00:06, 1.98it/s]
40%|████      | 8/20 [00:05<00:05, 2.07it/s]
45%|████▌     | 9/20 [00:05<00:05, 2.14it/s]
50%|█████     | 10/20 [00:06<00:04, 2.18it/s]
55%|█████▌    | 11/20 [00:06<00:04, 2.21it/s]
60%|██████    | 12/20 [00:06<00:03, 2.24it/s]
65%|██████▌   | 13/20 [00:07<00:03, 2.25it/s]
70%|███████   | 14/20 [00:07<00:02, 2.26it/s]
75%|███████▌  | 15/20 [00:08<00:02, 2.27it/s]
80%|████████  | 16/20 [00:08<00:01, 2.28it/s]
85%|████████▌ | 17/20 [00:09<00:01, 2.28it/s]
90%|█████████ | 18/20 [00:09<00:00, 2.28it/s]
95%|█████████▌| 19/20 [00:10<00:00, 2.29it/s]
100%|██████████| 20/20 [00:10<00:00, 2.29it/s]
100%|██████████| 20/20 [00:10<00:00, 1.91it/s]
Version Details
- Version ID: a0ebe82aad0744fbc8f6964143760ed306af3864daa4b62a776b656636c1f191
- Version Created: May 5, 2024
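This version can be invoked with the Replicate Python client. A minimal sketch, where the image path is a placeholder and the version ID is the one listed above:

```python
# Model reference: owner/name pinned to the version ID from this page.
MODEL_REF = (
    "kitaef/mytestmodel:"
    "a0ebe82aad0744fbc8f6964143760ed306af3864daa4b62a776b656636c1f191"
)


def run_model(image_path):
    """Send the required input_image and return the prediction output."""
    import replicate  # pip install replicate; requires REPLICATE_API_TOKEN

    with open(image_path, "rb") as f:
        return replicate.run(MODEL_REF, input={"input_image": f})


if __name__ == "__main__":
    # "person.jpg" is a placeholder path, not a file shipped with the model.
    print(run_model("person.jpg"))
```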