🤖 Model

fofr/prompt-classifier
Score text prompts for NSFW/toxicity on a 0–10 safety scale. Accepts a text prompt (e.g., for text-to-image systems) and...
Found 2 models (showing 1-2)
Detect hate speech and toxic language in text. Accepts a text string and returns probability scores for toxicity, severe...
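
As a rough illustration of how a prompt classifier like the one listed above might be called, here is a minimal sketch using the Replicate Python client. The model name `fofr/prompt-classifier` comes from the listing; the input field name (`prompt`), whether a version hash is required, and the exact output format are assumptions, so check the model's page for the real schema.

```python
# Minimal sketch: scoring a text prompt for NSFW/toxicity via the
# Replicate Python client. Field names and output format are assumptions,
# not taken from the model's documented schema.
import replicate

# A pinned version may be required, e.g. "fofr/prompt-classifier:<version-hash>".
output = replicate.run(
    "fofr/prompt-classifier",
    input={"prompt": "a photo of a cat wearing sunglasses"},  # assumed input name
)

# The listing describes a 0-10 safety/toxicity score; the raw output may be
# a string or token stream that needs parsing before use.
print(output)
```

A thin wrapper like this is typically placed in front of a text-to-image pipeline so that prompts scoring above a chosen threshold are rejected before any image generation runs.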