🤖 Model: meta/meta-llama-guard-2-8b
Moderate LLM conversations by classifying user prompts and assistant responses as SAFE or UNSAFE and listing violated po...
Tags: content-moderation • text-classification • guardrails
Runs: 734.9K
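A minimal sketch of consuming a verdict from a guard model of this kind. It assumes the common Llama Guard-style output format: a first line of `safe` or `unsafe`, with an optional second line of comma-separated violated category codes (e.g. `S1,S3`); the exact format returned by this deployment is an assumption, not confirmed by the listing.

```python
def parse_verdict(output: str) -> tuple[bool, list[str]]:
    """Parse raw guard-model output into (is_safe, violated_categories).

    Assumed format (not confirmed for this deployment):
      "safe"              -> (True, [])
      "unsafe\nS1,S3"     -> (False, ["S1", "S3"])
    """
    lines = [line.strip() for line in output.strip().splitlines() if line.strip()]
    if not lines:
        raise ValueError("empty moderation output")
    is_safe = lines[0].lower() == "safe"
    if is_safe or len(lines) < 2:
        return is_safe, []
    # Second line carries the comma-separated category codes.
    return False, [code.strip() for code in lines[1].split(",")]
```

A caller would run the model (for example through an inference client), pass the raw text output to `parse_verdict`, and block or log the turn when `is_safe` is false.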