meta/llama-guard-4-12b
Moderate and classify text and images for safety policy compliance. Accepts a text prompt and optional multiple images, a...
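As a minimal sketch of how a moderation model like this might be invoked, assuming a Replicate-style Python client; the input field names ("prompt", "images") are assumptions, not taken from this listing, so check the model's actual schema:

    import replicate

    # Hypothetical call to the model above; "prompt" and "images" are
    # assumed field names, and the client is the `replicate` package.
    output = replicate.run(
        "meta/llama-guard-4-12b",
        input={
            "prompt": "User message to moderate goes here.",
            "images": ["https://example.com/input.jpg"],  # optional
        },
    )

    # Guard-style models typically return a verdict such as "safe" or
    # "unsafe" plus a violated-category code.
    print(output)

The same pattern should apply to the text-only entries below, minus the image field.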
Found 6 models (showing 1-6)
Moderate LLM prompts and responses for safety policy compliance. Accepts a user prompt and/or an assistant reply as text...
Moderate text prompts and assistant responses for safety and policy compliance. Accepts a user message (prompt) and/or a...
Classify text prompts and assistant responses for safety. Accepts a user message and/or assistant reply and outputs a sa...
Classify the safety of multimodal inputs (image and user message) for content moderation. Accepts an image (required) an...
Classify text against custom safety policies with rationale. Accepts a plain‑English policy and a text input, and return...
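The last entry accepts a plain-English policy rather than a fixed taxonomy. A hedged sketch under the same Replicate-style assumption; the model identifier is a placeholder because the entry's name is truncated, and the "policy"/"text" field names are assumptions:

    import replicate

    # Placeholder identifier: the listing cuts off this model's name.
    MODEL = "owner/custom-policy-classifier"  # hypothetical

    result = replicate.run(
        MODEL,
        input={
            # A plain-English policy plus the text to classify, per the
            # entry's description; field names are assumed.
            "policy": "Disallow requests for private individuals' personal data.",
            "text": "Can you find my neighbor's home address?",
        },
    )

    # Per the description, the output includes a classification and a
    # rationale explaining the decision.
    print(result)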