dribnet/pixray-vqgan (text to image)

87.4K runs · Oct 2021 · Cog 0.4.4 · GitHub · License
Tags: text-to-image, vqgan, vqgan-clip

Example Output

Output

[29 output images; previews not rendered]

Performance Metrics

478.73s Total Time
All Input Parameters
{
  "aspect": "widescreen",
  "prompts": "Squid Game by Hwang Dong-hyuk",
  "quality": "better"
}
Input Parameters
aspect (Default: widescreen): wide vs square
prompts (Type: string, Default: rainbow mountain): text prompt
quality (Default: normal): better is slower
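The three parameters above can be supplied through the Replicate Python client. A minimal sketch, assuming the `replicate` package is installed and REPLICATE_API_TOKEN is set in the environment; the version hash is the one listed under Version Details, and `build_input` is a hypothetical helper, not part of the client API:

```python
# Sketch: calling dribnet/pixray-vqgan through the Replicate Python client.
# `build_input` is an illustrative helper; only `replicate.run` is client API.
import os

def build_input(prompts, aspect="widescreen", quality="normal"):
    """Assemble the input dict this model expects (see Input Parameters above)."""
    return {"prompts": prompts, "aspect": aspect, "quality": quality}

inputs = build_input("Squid Game by Hwang Dong-hyuk", quality="better")

if os.environ.get("REPLICATE_API_TOKEN"):
    import replicate  # network call only attempted when a token is configured
    output = replicate.run(
        "dribnet/pixray-vqgan:263c3e9874df0f13a87f3324769aaf7a4cdeb9a560a521761ad7034b14da9e05",
        input=inputs,
    )
    for url in output:
        print(url)
else:
    print(inputs)
```

With quality set to "better", expect noticeably longer runs; the example run above took 478.73s.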
Output Schema

Output

Type: array; Items Type: string; Items Format: uri
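Per this schema, a response is a list of URI strings, one per saved image. A small sketch of validating a response against that shape before downloading anything; the helper names are illustrative, not part of any client library:

```python
# Sketch: checking a model response against the output schema
# (array of URI-formatted strings). Helper names are illustrative.
from urllib.parse import urlparse

def is_image_uri(value):
    """True if `value` parses as an http(s) URI with a host."""
    parsed = urlparse(value)
    return parsed.scheme in ("http", "https") and bool(parsed.netloc)

def validate_output(output):
    """True only for a list in which every item is a URI-formatted string."""
    return isinstance(output, list) and all(
        isinstance(item, str) and is_image_uri(item) for item in output
    )

sample = ["https://replicate.delivery/example/output.png"]
print(validate_output(sample))  # a well-formed output list passes
```

A bare string or a list of non-strings fails the check, which matches the schema's "array of strings, format uri" contract.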

Example Execution Logs
---> BasePixrayPredictor Predict
Using seed: 11092165953890664426
Working with z of shape (1, 256, 16, 16) = 65536 dimensions.
loaded pretrained LPIPS loss from taming/modules/autoencoder/lpips/vgg.pth
VQLPIPSWithDiscriminator running with hinge loss.
Restored from models/vqgan_imagenet_f16_16384.ckpt
Using device: cuda:0
Optimising using: Adam
Using text prompts: ['Squid Game by Hwang Dong-hyuk']

[repeated tqdm progress stubs, `0it [00:xx, ?it/s]`, omitted]
/root/.pyenv/versions/3.8.12/lib/python3.8/site-packages/torch/nn/functional.py:3609: UserWarning: Default upsampling behavior when mode=bilinear is changed to align_corners=False since 0.4.0. Please specify align_corners=True if the old behavior is desired. See the documentation of nn.Upsample for details.
  warnings.warn(
iter: 0, loss: 3.03, losses: 0.997, 0.0773, 0.924, 0.0466, 0.943, 0.047 (-0=>3.034)
iter: 10, loss: 2.9, losses: 0.954, 0.0795, 0.884, 0.0474, 0.885, 0.0473 (-0=>2.897)
iter: 20, loss: 2.86, losses: 0.943, 0.0803, 0.874, 0.0464, 0.864, 0.0476 (-2=>2.8)
iter: 30, loss: 2.76, losses: 0.913, 0.0803, 0.847, 0.0468, 0.823, 0.047 (-0=>2.758)
iter: 40, loss: 2.71, losses: 0.9, 0.0828, 0.826, 0.0488, 0.805, 0.0491 (-4=>2.674)
iter: 50, loss: 2.66, losses: 0.882, 0.0817, 0.811, 0.0498, 0.786, 0.0496 (-3=>2.603)
iter: 60, loss: 2.53, losses: 0.835, 0.0817, 0.78, 0.0497, 0.736, 0.0514 (-5=>2.53)
iter: 70, loss: 2.59, losses: 0.863, 0.0827, 0.788, 0.0506, 0.759, 0.0509 (-1=>2.498)
iter: 80, loss: 2.55, losses: 0.845, 0.0813, 0.777, 0.0517, 0.74, 0.0507 (-2=>2.458)
iter: 90, loss: 2.52, losses: 0.826, 0.0818, 0.774, 0.0525, 0.738, 0.0511 (-7=>2.434)
iter: 100, loss: 2.41, losses: 0.786, 0.0817, 0.745, 0.0558, 0.689, 0.0543 (-3=>2.4)
iter: 110, loss: 2.5, losses: 0.824, 0.0847, 0.762, 0.0541, 0.721, 0.0536 (-7=>2.381)
iter: 120, loss: 2.49, losses: 0.826, 0.0828, 0.75, 0.0542, 0.723, 0.0526 (-7=>2.371)
iter: 130, loss: 2.45, losses: 0.8, 0.0835, 0.752, 0.0553, 0.708, 0.0542 (-9=>2.363)
iter: 140, loss: 2.48, losses: 0.82, 0.0836, 0.752, 0.0553, 0.715, 0.0531 (-3=>2.328)
iter: 150, loss: 2.44, losses: 0.81, 0.0828, 0.739, 0.0559, 0.703, 0.0527 (-13=>2.328)
Caught SIGTERM, exiting...
iter: 160, loss: 2.46, losses: 0.812, 0.0827, 0.753, 0.0546, 0.705, 0.0534 (-7=>2.313)
iter: 170, loss: 2.32, losses: 0.747, 0.0855, 0.712, 0.0596, 0.659, 0.0564 (-1=>2.292)
iter: 180, loss: 2.42, losses: 0.8, 0.0828, 0.737, 0.0559, 0.689, 0.0546 (-11=>2.292)
iter: 190, loss: 2.42, losses: 0.799, 0.0843, 0.739, 0.0549, 0.688, 0.0542 (-21=>2.292)
iter: 200, loss: 2.39, losses: 0.795, 0.0851, 0.731, 0.0558, 0.666, 0.0545 (-4=>2.281)
iter: 210, loss: 2.4, losses: 0.786, 0.0847, 0.735, 0.0565, 0.683, 0.0557 (-14=>2.281)
iter: 220, loss: 2.27, losses: 0.743, 0.0863, 0.698, 0.06, 0.627, 0.0593 (-5=>2.254)
Dropping learning rate
iter: 230, loss: 2.33, losses: 0.765, 0.0865, 0.712, 0.0588, 0.652, 0.0568 (-1=>2.221)
iter: 240, loss: 2.35, losses: 0.774, 0.0837, 0.725, 0.0571, 0.66, 0.0548 (-11=>2.221)
iter: 250, loss: 2.32, losses: 0.765, 0.0853, 0.706, 0.0591, 0.651, 0.0567 (-3=>2.203)
iter: 260, loss: 2.34, losses: 0.765, 0.0862, 0.713, 0.059, 0.66, 0.057 (-9=>2.2)
iter: 270, loss: 2.39, losses: 0.788, 0.0844, 0.729, 0.0561, 0.674, 0.056 (-4=>2.197)
iter: 280, loss: 2.22, losses: 0.711, 0.086, 0.688, 0.0611, 0.609, 0.0604 (-9=>2.188)
iter: 290, loss: 2.33, losses: 0.773, 0.0856, 0.708, 0.0592, 0.648, 0.0574 (-5=>2.186)
iter: 300, finished (-15=>2.186)
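The `iter:` lines above reflect pixray's optimization loop: a VQGAN latent is updated with Adam against CLIP-derived losses, logging every 10 iterations, and the trailing `(-k=>best)` reads as "k iterations since the best loss, which was best". A toy loop reproducing just that logging pattern on a stand-in quadratic objective (illustrative only, not pixray's actual code):

```python
# Toy gradient-descent loop mimicking the log format
# "iter: N, loss: L (-k=>best)": k = iterations since the best loss so far.
# The objective is a stand-in; pixray optimizes a VQGAN latent with Adam
# against CLIP-based losses.
def f(x):
    return (x - 3.0) ** 2  # stand-in convex objective

def grad(x):
    return 2.0 * (x - 3.0)

x, lr = 0.0, 0.05
best, best_iter = float("inf"), 0
history = []
for i in range(101):
    loss = f(x)
    if loss < best:
        best, best_iter = loss, i
    if i % 10 == 0:
        line = f"iter: {i}, loss: {loss:.3g} (-{i - best_iter}=>{best:.4g})"
        history.append(line)
        print(line)
    x -= lr * grad(x)  # plain gradient step; pixray uses torch.optim.Adam
```

On this convex toy the loss falls monotonically, so `k` stays 0; in the real run above the CLIP losses are noisy, which is why `k` drifts up (e.g. `-21=>2.292` at iter 190) before a new best is found.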
Version Details
Version ID
263c3e9874df0f13a87f3324769aaf7a4cdeb9a560a521761ad7034b14da9e05
Version Created
October 27, 2022