meta/llama-guard-4-12b
Moderate text prompts, model responses, and images for safety compliance. Accepts text with optional multiple images and...
Found 2 models (showing 1-2)
Detect prompt injection and jailbreak attempts in text inputs. Classify a string as benign, injection, and/or jailbreak...
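Guard models in this family typically return a short verdict string rather than structured JSON: `safe`, or `unsafe` followed by hazard-category codes such as `S1`. A minimal sketch of handling such a reply client-side, assuming that conventional output format (the `parse_guard_output` helper below is hypothetical, not part of any listed model's API):

```python
def parse_guard_output(raw: str) -> dict:
    """Parse a Llama Guard-style reply into a structured verdict.

    Assumes the conventional format: first line is "safe" or "unsafe",
    and an optional second line lists comma-separated category codes
    (e.g. "S1,S10").
    """
    lines = raw.strip().splitlines()
    verdict = lines[0].strip().lower()
    categories = []
    if len(lines) > 1:
        categories = [code.strip() for code in lines[1].split(",") if code.strip()]
    return {"safe": verdict == "safe", "categories": categories}


print(parse_guard_output("safe"))
print(parse_guard_output("unsafe\nS1,S10"))
```

In practice the raw string would come from the moderation model's text completion; downstream code can then branch on the `safe` flag and log the category codes.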