meta/llama-guard-4-12b (inputs: number, text, image → output: text)

Official · 15.5K runs · Jun 2025 · Cog 0.15.6 · License
content-moderation image-analysis image-nsfw-detection llm-guardrails moderation multimodal safety-classification

About

Example Output

Prompt:

"one fish two fish red fish blue fish"

Output

safe

Performance Metrics

0.28s Prediction Time
8.79s Total Time
All Input Parameters
{
  "top_p": 1,
  "prompt": "one fish two fish red fish blue fish",
  "image_input": [],
  "temperature": 1,
  "presence_penalty": 0,
  "frequency_penalty": 0,
  "max_completion_tokens": 512
}
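The JSON above maps directly onto a call through Replicate's official Python client. A minimal sketch under stated assumptions: the model identifier and input keys are taken from this page, a valid `REPLICATE_API_TOKEN` must be set in the environment, and the `replicate` package must be installed (`pip install replicate`).

```python
def build_input(prompt, image_urls=None):
    """Assemble an input payload using the defaults shown on this page."""
    return {
        "prompt": prompt,
        "image_input": image_urls or [],
        "top_p": 1,
        "temperature": 1,
        "presence_penalty": 0,
        "frequency_penalty": 0,
        "max_completion_tokens": 512,
    }

if __name__ == "__main__":
    # Network call: requires REPLICATE_API_TOKEN and the replicate package.
    import replicate

    output = replicate.run(
        "meta/llama-guard-4-12b",
        input=build_input("one fish two fish red fish blue fish"),
    )
    # Per the Output Schema below, output is an array of strings;
    # join the pieces to get the full verdict text.
    print("".join(output))
```

The defaults in `build_input` can be omitted entirely; they are included here only to mirror the "All Input Parameters" example above.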
Input Parameters
top_p (Type: number, Default: 1, Range: 0 to 1)
Nucleus sampling parameter: the model considers only the tokens comprising the top_p probability mass. (0.1 means only the tokens in the top 10% probability mass are considered.)
prompt (Type: string)
The prompt to send to the model. Do not use if using messages.
image_input (Type: array, Default: [])
List of images to send to the model.
temperature (Type: number, Default: 1, Range: 0 to 2)
Sampling temperature between 0 and 2.
system_prompt (Type: string)
System prompt to set the assistant's behavior.
presence_penalty (Type: number, Default: 0, Range: -2 to 2)
Presence penalty parameter: positive values penalize tokens that have already appeared in the text so far, increasing the model's likelihood to talk about new topics.
frequency_penalty (Type: number, Default: 0, Range: -2 to 2)
Frequency penalty parameter: positive values penalize the repetition of tokens.
max_completion_tokens (Type: integer, Default: 512, Range: 1 to 1024)
Maximum number of completion tokens to generate.
Output Schema

Output

Type: array · Items Type: string
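Because the output is an array of strings (streamed pieces of the response), callers typically concatenate the pieces before inspecting the verdict. A small hedged helper follows; the assumption, based on the Llama Guard model family rather than anything stated on this page, is that the model emits either "safe" or "unsafe" followed by hazard category codes (e.g. "S1") on a subsequent line.

```python
def parse_verdict(tokens):
    """Join streamed output tokens and extract the safety verdict.

    Assumes Llama Guard-style output: "safe", or "unsafe" followed by
    comma-separated hazard category codes on the next line.
    """
    text = "".join(tokens).strip()
    lines = text.splitlines()
    verdict = lines[0].strip() if lines else ""
    categories = lines[1].split(",") if len(lines) > 1 else []
    return {
        "safe": verdict == "safe",
        "verdict": verdict,
        "categories": [c.strip() for c in categories],
    }

# With the example output shown above:
print(parse_verdict(["safe"]))
# {'safe': True, 'verdict': 'safe', 'categories': []}
```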

Example Execution Logs
Input token count: 208
Output token count: 2
Total token count: 210
TTFT: 0.27s
Version Details
Version ID
b04f49b037b3a1476128f1c7434cf64385ccec6dc7d7d344647df0fc2103892c
Version Created
June 23, 2025