🤖 Model: fofr/prompt-classifier
Classifies text prompts for text-to-image systems and returns a safety score from 0 (safe) to 10 (very NSFW/toxic). Accepts...
text-safety • text-nsfw-detection • toxicity-detection • 1.9M runs
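
For illustration, a minimal sketch of calling this model through the Replicate Python client. The `prompt` input name, the plain-text score in the response, and the unpinned model reference are assumptions, not details taken from the listing above; pin a specific version id if your client requires one.

```python
# Minimal sketch: score a text-to-image prompt with fofr/prompt-classifier.
# Assumptions (not confirmed by the listing): the input field is "prompt"
# and the response text contains the 0-10 safety score. Requires the
# `replicate` package and a REPLICATE_API_TOKEN environment variable.
import replicate


def classify_prompt(prompt: str) -> str:
    """Send a prompt and return the raw safety classification text."""
    output = replicate.run(
        "fofr/prompt-classifier",  # append ":<version-hash>" to pin a version
        input={"prompt": prompt},
    )
    # Language models on Replicate often stream output as string chunks;
    # join them into a single string, otherwise return the value as text.
    return "".join(output) if not isinstance(output, str) else str(output)


if __name__ == "__main__":
    # An innocuous prompt should come back with a score near 0 (safe).
    print(classify_prompt("a watercolor painting of a quiet harbor at dawn"))
```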