meta/llama-guard-4-12b 🔢📝🖼️ → 📝
About

Llama Guard 4 is Meta's 12B-parameter multimodal safety classifier: given a prompt (and optionally images), it returns a moderation verdict rather than a conversational reply.
Example Output
Prompt:
"one fish two fish red fish blue fish"
Output:
safe
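
Llama Guard models conventionally answer with `safe`, or `unsafe` followed by the violated category codes (e.g., `S1`). A minimal parsing sketch under that assumption; `parse_guard_verdict` is a hypothetical helper name, not part of this model's API:

```python
def parse_guard_verdict(text: str) -> dict:
    """Parse a Llama Guard-style verdict into a structured result.

    Assumes the conventional Llama Guard format: "safe", or "unsafe"
    followed by a line of violated category codes such as "S1,S10".
    """
    lines = [line.strip() for line in text.strip().splitlines() if line.strip()]
    verdict = lines[0].lower() if lines else ""
    categories = lines[1].split(",") if verdict == "unsafe" and len(lines) > 1 else []
    return {"safe": verdict == "safe", "categories": categories}

print(parse_guard_verdict("safe"))            # {'safe': True, 'categories': []}
print(parse_guard_verdict("unsafe\nS1,S10"))  # {'safe': False, 'categories': ['S1', 'S10']}
```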
Performance Metrics
- Prediction time: 0.28s
- Total time: 8.79s

The gap between prediction and total time most likely reflects queueing and cold-start overhead rather than model latency.
All Input Parameters

    {
      "top_p": 1,
      "prompt": "one fish two fish red fish blue fish",
      "image_input": [],
      "temperature": 1,
      "presence_penalty": 0,
      "frequency_penalty": 0,
      "max_completion_tokens": 512
    }
Input Parameters
- top_p: Nucleus sampling parameter; the model considers only the tokens comprising the top_p probability mass (0.1 means only tokens in the top 10% of probability mass are considered).
- prompt: The prompt to send to the model. Do not use together with messages.
- image_input: List of images to send to the model.
- temperature: Sampling temperature, between 0 and 2.
- system_prompt: System prompt to set the assistant's behavior.
- presence_penalty: Positive values penalize new tokens based on whether they have already appeared in the text so far, increasing the model's likelihood to talk about new topics.
- frequency_penalty: Positive values penalize tokens based on how often they have already appeared, reducing repetition.
- max_completion_tokens: Maximum number of completion tokens to generate.
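
For orientation, a minimal sketch of invoking this model through the Replicate Python client, with parameter names mirroring the list above. The exact return shape (a string vs. a stream of string chunks) depends on the model version, so the join below is a defensive assumption:

```python
import replicate  # pip install replicate; requires REPLICATE_API_TOKEN in the environment

output = replicate.run(
    "meta/llama-guard-4-12b",
    input={
        "prompt": "one fish two fish red fish blue fish",
        "top_p": 1,
        "temperature": 1,
        "max_completion_tokens": 512,
    },
)

# replicate.run may return a string or an iterable of string chunks;
# joining handles both cases (joining a plain string is a no-op).
verdict = "".join(output)
print(verdict)  # expected: "safe"
```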
Output Schema
The model returns plain text; in the example above, the entire output is the single word "safe".
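
A sketch of what the output schema plausibly looks like, written as a Python dict for consistency with the examples above. The `x-cog-*` keys follow Replicate's Cog convention for streamed language-model output; their use here is an assumption, not the published schema:

```python
# Hypothetical output schema, following Replicate's Cog convention for
# language models that stream string chunks concatenated for display.
OUTPUT_SCHEMA = {
    "type": "array",
    "items": {"type": "string"},
    "title": "Output",
    "x-cog-array-type": "iterator",        # assumption: output is streamed
    "x-cog-array-display": "concatenate",  # assumption: chunks join into one string
}
```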
Example Execution Logs

    Input token count: 208
    Output token count: 2
    Total token count: 210
    TTFT: 0.27s
Version Details
- Version ID: b04f49b037b3a1476128f1c7434cf64385ccec6dc7d7d344647df0fc2103892c
- Version created: June 23, 2025