🤖 Model
meta/meta-llama-guard-2-8b
Moderate LLM prompts and responses for safety policy compliance. Accepts a user prompt and/or an assistant reply as text...
Found 2 models (showing 1-2)
Classify text prompts for text-to-image systems and return a safety score from 0 (safe) to 10 (very NSFW/toxic). Accepts...
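As a sketch of how the first model's input might be assembled before being handed to a client's `run("meta/meta-llama-guard-2-8b", input=...)` call: the field names `prompt` and `assistant` below are illustrative assumptions, not the model's documented schema.

```python
MODEL_ID = "meta/meta-llama-guard-2-8b"

def build_moderation_input(user_prompt, assistant_reply=None):
    # Assemble the moderation payload: a user prompt and/or an
    # assistant reply, per the listing's description. Field names
    # here are assumptions; check the model's actual input schema.
    payload = {"prompt": user_prompt}
    if assistant_reply is not None:
        payload["assistant"] = assistant_reply
    return payload

# The resulting dict would then be passed to the model, e.g.
# client.run(MODEL_ID, input=build_moderation_input("How do I ...?"))
```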