zedge/stable-diffusion

51.6M runs · Created Oct 2022 · Cog 0.16.12 · GitHub
text-to-image

About

Private instance of stable-diffusion

Example Output

Prompt:

"Beautiful digital matte pastel paint sunflowers poppies chillwave greg rutkowski artstation"

Output

[Example output image]

Performance Metrics

14.03s Prediction Time
136.38s Total Time
All Input Parameters
{
  "width": 512,
  "height": 896,
  "prompt": "Beautiful digital matte pastel paint sunflowers poppies chillwave greg rutkowski artstation",
  "num_outputs": 1,
  "guidance_scale": 7.5,
  "prompt_strength": 0.8,
  "num_inference_steps": 50
}
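As a sketch (not taken from this page), these inputs could be submitted with the Replicate Python client; the model reference below combines the model name with the version ID shown under Version Details, and the `build_call` helper is a hypothetical name introduced here for illustration.

```python
# Sketch only: assumes the `replicate` pip package and a REPLICATE_API_TOKEN
# set in the environment. The helper name is hypothetical.
MODEL_REF = (
    "zedge/stable-diffusion:"
    "69d39dcbb296da580994d867890ba2410d1fb6be9da9225d9bb48da2181594cf"
)

EXAMPLE_INPUT = {
    "width": 512,
    "height": 896,  # integer, matching the declared integer type
    "prompt": (
        "Beautiful digital matte pastel paint sunflowers poppies "
        "chillwave greg rutkowski artstation"
    ),
    "num_outputs": 1,
    "guidance_scale": 7.5,
    "prompt_strength": 0.8,
    "num_inference_steps": 50,
}

def build_call(model_ref, params):
    """Return the (model_ref, input) pair that would be passed to
    replicate.run(model_ref, input=params)."""
    return model_ref, dict(params)
```

With the client installed, `model_ref, params = build_call(MODEL_REF, EXAMPLE_INPUT)` followed by `replicate.run(model_ref, input=params)` would start a prediction; the network call is deliberately not made here.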
Input Parameters
seed
Type: integer | Default: -1
Random seed (negative for random)
width
Type: integer | Default: 1024
Width of output image
height
Type: integer | Default: 1024
Height of output image
prompt
Type: string | Default: astronaut in a 90s college party, vhs photo
Prompt
padding
Type: integer | Default: 0
Padding for the image after trimming
verbose
Type: boolean | Default: false
Print detailed timing information
threshold
Type: integer | Default: 80 | Range: 0 - 255
Threshold for transparency (0-255). Higher values make more pixels transparent.
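The threshold behaviour can be sketched in pure Python, assuming the checker compares each pixel's alpha (or mask) value against the threshold and zeroes values below it; the exact comparison the model uses is not documented on this page.

```python
def apply_transparency_threshold(alpha_values, threshold=80):
    """Hypothetical sketch: make pixels whose alpha is below `threshold`
    fully transparent (0). A higher threshold knocks out more pixels,
    matching 'higher values make more pixels transparent'."""
    if not 0 <= threshold <= 255:
        raise ValueError("threshold must be in 0-255")
    return [0 if a < threshold else a for a in alpha_values]
```

For example, `apply_transparency_threshold([10, 80, 200], 80)` keeps the 80 and 200 values and zeroes only the 10, while a threshold of 255 zeroes all three.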
warm_delay
Type: integer | Default: -1
Parameter for warming the model. If set, returns an empty dict after the specified number of seconds
num_outputs
Type: integer | Default: 1 | Range: 1 - 4
Number of images to output
safety_prompt
Type: string | Default: Analyze the provided image for hate content. Output "True" if the image contains any of the following: - Nazi symbols (e.g., swastika) - Symbols/propaganda associated with terrorist organizations - Graphic violence (explicit depictions of severe injury, gore, mutilation, or torture) - Content promoting suicide - Symbols/imagery related to White supremacist groups - Recognizable symbols/imagery promoting violent misogyny or anti-LGBTQ+ hate - Dehumanizing caricatures or propaganda targeting racial, ethnic, or religious groups - Prominent text within the image that clearly constitutes direct hate speech (e.g., slurs, calls for violence against protected groups) Otherwise, output "False".
Prompt for InternVL to check for NSFW content
stray_removal
Type: number | Default: 0.01 | Range: 0.001 - 0.3
Remove components smaller than this ratio of the largest component (0.01 = 1%, 0.1 = 10%, etc.)
trim_background
Type: boolean | Default: false
Trim background transparency from the image after removing the background
remove_background
Type: boolean | Default: false
Remove the background from the image
disable_nsfw_checker
Type: boolean | Default: false
Disable the safety checker for generated images.
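Pulling the documented ranges together, a minimal client-side validator (a sketch; the structure and names here are illustrative, not part of the model's API) could reject out-of-range values before a request is submitted:

```python
# Bounds taken from the parameter list above; the validator itself is
# a hypothetical sketch, not part of the model's API.
RANGES = {
    "threshold": (0, 255),
    "num_outputs": (1, 4),
    "stray_removal": (0.001, 0.3),
}

def validate_params(params):
    """Raise ValueError for any parameter outside its documented range;
    return the params unchanged if all bounds are satisfied."""
    for name, (lo, hi) in RANGES.items():
        if name in params and not lo <= params[name] <= hi:
            raise ValueError(f"{name}={params[name]!r} outside [{lo}, {hi}]")
    return params
```

For instance, `validate_params({"num_outputs": 2})` passes, while `validate_params({"threshold": 300})` raises a `ValueError`.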
Output Schema

Output

Type: object

Example Execution Logs
Using seed: 20725

0it [00:00, ?it/s]
1it [00:04,  4.36s/it]
2it [00:04,  1.86s/it]
3it [00:04,  1.07s/it]
4it [00:04,  1.42it/s]
5it [00:04,  2.02it/s]
6it [00:04,  2.72it/s]
7it [00:05,  3.45it/s]
8it [00:05,  4.14it/s]
9it [00:05,  4.84it/s]
10it [00:05,  5.56it/s]
11it [00:05,  6.00it/s]
12it [00:05,  6.47it/s]
13it [00:05,  6.87it/s]
14it [00:06,  6.98it/s]
15it [00:06,  7.06it/s]
16it [00:06,  7.46it/s]
17it [00:06,  7.51it/s]
18it [00:06,  7.63it/s]
19it [00:06,  7.65it/s]
20it [00:06,  7.50it/s]
21it [00:06,  7.74it/s]
22it [00:07,  7.75it/s]
23it [00:07,  7.65it/s]
24it [00:07,  7.50it/s]
25it [00:07,  7.61it/s]
26it [00:07,  7.55it/s]
27it [00:07,  7.75it/s]
28it [00:07,  7.65it/s]
29it [00:07,  7.77it/s]
30it [00:08,  7.83it/s]
31it [00:08,  7.84it/s]
32it [00:08,  7.60it/s]
33it [00:08,  7.72it/s]
34it [00:08,  7.68it/s]
35it [00:08,  7.62it/s]
36it [00:08,  8.01it/s]
37it [00:08,  7.93it/s]
38it [00:09,  7.78it/s]
39it [00:09,  7.82it/s]
40it [00:09,  7.81it/s]
41it [00:09,  7.35it/s]
42it [00:09,  7.75it/s]
43it [00:09,  7.72it/s]
44it [00:09,  7.71it/s]
45it [00:10,  7.66it/s]
46it [00:10,  7.69it/s]
47it [00:10,  7.67it/s]
48it [00:10,  7.74it/s]
49it [00:10,  7.77it/s]
50it [00:10,  7.70it/s]
50it [00:10,  4.68it/s]
NSFW content detected in 0 outputs, showing the rest 1 images...
Version Details
Version ID
69d39dcbb296da580994d867890ba2410d1fb6be9da9225d9bb48da2181594cf
Version Created
March 2, 2026