Found 2 models (showing 1-2)

🤖 Model
fofr/prompt-classifier
Classify text prompts for text-to-image systems and return a safety score from 0 (safe) to 10 (very NSFW/toxic). Accepts...
🤖 Model
Detect hate speech and toxic content in text. Accepts a text string and returns JSON scores for toxicity, severe_toxicit...
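For reference, a minimal sketch of calling the first result (fofr/prompt-classifier) with the Replicate Python client. The unpinned model reference and the `prompt` input name are assumptions; check the model's schema on Replicate before relying on them.

```python
# Minimal sketch: score a text-to-image prompt with fofr/prompt-classifier
# via the Replicate Python client. Assumes REPLICATE_API_TOKEN is set in the
# environment and that the model exposes a "prompt" input (verify against
# the model's schema).
import replicate

def score_prompt(prompt: str):
    # replicate.run() may require pinning a specific version,
    # e.g. "fofr/prompt-classifier:<version-hash>"; adjust as needed.
    output = replicate.run(
        "fofr/prompt-classifier",
        input={"prompt": prompt},
    )
    # Per the listing above, the model returns a safety score from
    # 0 (safe) to 10 (very NSFW/toxic).
    return output

if __name__ == "__main__":
    print(score_prompt("a watercolor painting of a quiet harbor at dawn"))
```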