🤖 Model 🖼️
meta/llama-guard-4-12b
Moderate and classify text and images for safety policy compliance. Accept a text prompt and optional multiple images, a...
Found 2 models (showing 1-2)
Moderate and classify text and images for safety policy compliance. Accept a text prompt and optional multiple images, a...
Detect prompt injection and jailbreak attempts in text inputs. Classify a string as benign, injection, and/or jailbreak...