dribnet/pixray-api 📝 → 🖼️
About
Runs pixray with raw YAML settings passed directly through to the model.
Example Output
Output
Performance Metrics
365.87s
Total Time
Input Parameters
- settings
- YAML settings
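The `settings` input is a single pixray configuration passed as a raw YAML string. As an illustrative sketch only: the prompt, palette URL, and version hash below are taken from the example run on this page, but the exact YAML key names (`prompts`, `palette`) are assumptions based on pixray's settings and should be verified against the pixray README before use.

```python
# Hypothetical sketch of calling this model via the Replicate Python client.
# The YAML keys below are assumptions; check them against pixray's docs.

settings_yaml = """\
prompts: robots at sunset
palette: https://lospec.com/palette-list/cl8uds-32x.png
"""

# The model takes one input field, `settings`, holding the raw YAML string.
payload = {"settings": settings_yaml}

# Uncomment to actually run (requires the `replicate` package and an API token):
# import replicate
# output = replicate.run(
#     "dribnet/pixray-api:01c87158fa585a5924b31332dcffbb0e488466a5ca556ebdb2da130ccf3ddeb8",
#     input=payload,
# )

print(sorted(payload.keys()))
```

Keeping the settings as one YAML blob (rather than separate API parameters) is what lets the model expose pixray's full "raw settings" surface through a single input.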
Output Schema
Output
Example Execution Logs
---> BasePixrayPredictor Predict
Found 8 colors in https://lospec.com/palette-list/cl8uds-32x.png
Using seed: 392990954
Working with z of shape (1, 256, 16, 16) = 65536 dimensions.
loaded pretrained LPIPS loss from taming/modules/autoencoder/lpips/vgg.pth
VQLPIPSWithDiscriminator running with hinge loss.
Restored from models/vqgan_imagenet_f16_16384.ckpt
color table has 8 entries like [[0.6470588235294118, 0.7176470588235294, 0.8313725490196079], [0.9882352941176471, 0.6901960784313725, 0.5490196078431373], [0.6039215686274509, 0.6705882352941176, 0.788235294117647], [0.5607843137254902, 0.6274509803921569, 0.7490196078431373], [0.9372549019607843, 0.615686274509804, 0.4980392156862745]]
Using device: cuda:0
Optimising using: Adam
Using text prompts: ['robots at sunset']
/root/.pyenv/versions/3.8.12/lib/python3.8/site-packages/torch/nn/functional.py:3631: UserWarning: Default upsampling behavior when mode=bilinear is changed to align_corners=False since 0.4.0. Please specify align_corners=True if the old behavior is desired. See the documentation of nn.Upsample for details.
  warnings.warn(
/root/.pyenv/versions/3.8.12/lib/python3.8/site-packages/torch/functional.py:445: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at ../aten/src/ATen/native/TensorShape.cpp:2157.)
  return _VF.meshgrid(tensors, **kwargs)  # type: ignore[attr-defined]
iter: 0, loss: 3.4, losses: 0.395, 0.994, 0.0795, 0.915, 0.047, 0.916, 0.0481 (-0=>3.395)
iter: 10, loss: 2.84, losses: 0.0314, 0.945, 0.0797, 0.85, 0.0455, 0.839, 0.047 (-0=>2.838)
iter: 20, loss: 2.8, losses: 0.0259, 0.935, 0.0811, 0.838, 0.0445, 0.831, 0.0458 (-0=>2.802)
iter: 30, loss: 2.79, losses: 0.0302, 0.931, 0.0805, 0.835, 0.0446, 0.823, 0.0454 (-2=>2.789)
iter: 40, loss: 2.78, losses: 0.031, 0.926, 0.081, 0.83, 0.0443, 0.82, 0.0458 (-3=>2.76)
iter: 50, loss: 2.76, losses: 0.0377, 0.918, 0.0814, 0.825, 0.0437, 0.814, 0.0452 (-13=>2.76)
iter: 60, loss: 2.75, losses: 0.0688, 0.9, 0.0821, 0.813, 0.0428, 0.801, 0.0451 (-6=>2.732)
iter: 70, loss: 2.7, losses: 0.0738, 0.879, 0.0798, 0.798, 0.0439, 0.777, 0.0456 (-0=>2.697)
iter: 80, loss: 2.75, losses: 0.138, 0.877, 0.0817, 0.791, 0.0444, 0.776, 0.0466 (-10=>2.697)
iter: 90, loss: 2.76, losses: 0.128, 0.893, 0.0822, 0.795, 0.044, 0.772, 0.0458 (-20=>2.697)
iter: 100, loss: 2.74, losses: 0.143, 0.88, 0.0824, 0.783, 0.0459, 0.759, 0.0481 (-6=>2.672)
iter: 110, loss: 2.71, losses: 0.116, 0.874, 0.0821, 0.782, 0.0443, 0.762, 0.047 (-16=>2.672)
iter: 120, loss: 2.72, losses: 0.147, 0.862, 0.0833, 0.778, 0.0459, 0.755, 0.0483 (-4=>2.661)
iter: 130, loss: 2.65, losses: 0.152, 0.843, 0.0834, 0.744, 0.0468, 0.734, 0.0485 (-6=>2.652)
iter: 140, loss: 2.69, losses: 0.145, 0.852, 0.0822, 0.767, 0.0457, 0.748, 0.0485 (-6=>2.633)
iter: 150, loss: 2.68, losses: 0.173, 0.849, 0.0832, 0.749, 0.046, 0.738, 0.047 (-8=>2.628)
iter: 160, loss: 2.73, losses: 0.17, 0.859, 0.0837, 0.773, 0.0461, 0.751, 0.0474 (-18=>2.628)
iter: 170, loss: 2.73, losses: 0.166, 0.87, 0.0834, 0.765, 0.0466, 0.756, 0.0472 (-28=>2.628)
iter: 180, loss: 2.68, losses: 0.164, 0.848, 0.0844, 0.744, 0.0476, 0.746, 0.0485 (-38=>2.628)
iter: 190, loss: 2.6, losses: 0.156, 0.818, 0.0838, 0.727, 0.0475, 0.72, 0.0484 (-0=>2.6)
iter: 200, loss: 2.7, losses: 0.177, 0.851, 0.0837, 0.752, 0.047, 0.739, 0.0484 (-10=>2.6)
iter: 210, loss: 2.7, losses: 0.175, 0.85, 0.0843, 0.751, 0.0473, 0.746, 0.0487 (-20=>2.6)
iter: 220, loss: 2.7, losses: 0.166, 0.858, 0.0853, 0.751, 0.0464, 0.743, 0.0481 (-30=>2.6)
Dropping learning rate
iter: 230, loss: 2.6, losses: 0.153, 0.824, 0.0858, 0.722, 0.048, 0.717, 0.0488 (-0=>2.599)
iter: 240, loss: 2.65, losses: 0.149, 0.843, 0.0851, 0.74, 0.0471, 0.738, 0.0493 (-1=>2.574)
iter: 250, loss: 2.65, losses: 0.151, 0.842, 0.0842, 0.744, 0.0474, 0.736, 0.0486 (-11=>2.574)
iter: 260, loss: 2.64, losses: 0.154, 0.836, 0.0851, 0.737, 0.0476, 0.731, 0.0481 (-21=>2.574)
iter: 270, loss: 2.65, losses: 0.145, 0.843, 0.0853, 0.742, 0.047, 0.735, 0.0482 (-4=>2.562)
iter: 280, loss: 2.61, losses: 0.16, 0.825, 0.0853, 0.721, 0.049, 0.716, 0.049 (-14=>2.562)
iter: 290, loss: 2.64, losses: 0.155, 0.835, 0.0849, 0.735, 0.0479, 0.734, 0.0484 (-24=>2.562)
iter: 300, finished (-34=>2.562)
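The `iter:` lines in the log carry the loss trajectory of the run. A minimal sketch of extracting them (the regex and function name are my own, matching the log format shown above):

```python
import re

# Match pixray progress lines of the form:
#   iter: 10, loss: 2.84, losses: 0.0314, ... (-0=>2.838)
# capturing the iteration number and the total loss.
ITER_RE = re.compile(r"iter: (\d+), loss: ([\d.]+)")

def loss_trajectory(log_text):
    """Return (iteration, total_loss) pairs found in the log text."""
    return [(int(i), float(l)) for i, l in ITER_RE.findall(log_text)]

# A few lines in the format of the example logs above:
sample = (
    "iter: 0, loss: 3.4, losses: 0.395, 0.994 (-0=>3.395)\n"
    "iter: 10, loss: 2.84, losses: 0.0314, 0.945 (-0=>2.838)\n"
    "iter: 300, finished (-34=>2.562)\n"
)
# The final "finished" line carries no loss value, so it is skipped.
print(loss_trajectory(sample))  # → [(0, 3.4), (10, 2.84)]
```

This kind of parse is handy for plotting whether a run has converged before its 300-iteration budget, as this example run roughly did around iteration 270.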
Version Details
- Version ID
01c87158fa585a5924b31332dcffbb0e488466a5ca556ebdb2da130ccf3ddeb8
- Version Created
October 27, 2022