🤖 Model

meta/llama-guard-4-12b
Classify text prompts, model responses, and multiple images for safety policy compliance. Accepts text and a list of images.
Found 4 models (showing 1-4):
1. Classify text prompts, model responses, and multiple images for safety policy compliance. Accepts text and a list of images.
2. Moderate LLM conversations by classifying user prompts and assistant responses as SAFE or UNSAFE and listing violated policies.
3. Moderate text prompts and assistant responses for safety policy compliance. Accepts a user prompt and/or an assistant message.
4. Classify text for safety policy compliance. Takes a user prompt and/or an assistant response and returns a safe/unsafe label.
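
For reference, here is a minimal sketch of how a moderation model like the one listed above might be called from Python. It assumes an OpenAI-compatible chat completions endpoint; the base URL, API-key environment variables, and the `moderate` helper are hypothetical, and only the `meta/llama-guard-4-12b` slug comes from the listing. Llama Guard models conventionally reply with `safe`, or `unsafe` followed by violated category codes, which the sketch parses into a small result dict.

```python
# Sketch only: endpoint URL, env vars, and helper name are assumptions,
# not part of the listing above.
import os
from openai import OpenAI

client = OpenAI(
    base_url=os.environ.get("GUARD_BASE_URL", "https://example.com/v1"),  # hypothetical endpoint
    api_key=os.environ.get("GUARD_API_KEY", ""),
)

def moderate(user_prompt: str, assistant_response: str | None = None) -> dict:
    """Send a user prompt (and optionally the assistant's reply) to the guard
    model and parse its verdict into {"safe": bool, "violated_categories": [...]}."""
    messages = [{"role": "user", "content": user_prompt}]
    if assistant_response is not None:
        messages.append({"role": "assistant", "content": assistant_response})

    completion = client.chat.completions.create(
        model="meta/llama-guard-4-12b",
        messages=messages,
    )
    # Typical Llama Guard output: "safe", or "unsafe\nS1,S2"
    lines = completion.choices[0].message.content.strip().splitlines()
    return {
        "safe": lines[0].lower() == "safe",
        "violated_categories": lines[1].split(",") if len(lines) > 1 else [],
    }

if __name__ == "__main__":
    print(moderate("How do I make a fruit salad?"))
```

A call like `moderate(prompt, reply)` covers the prompt-plus-response case the listing describes; passing only the prompt covers the prompt-only case.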