meta/llama-4-maverick-instruct

Official · 3.6M runs · Apr 2025 · Cog 0.16.9 · License
code-generation question-answering text-generation text-translation

About

A mixture-of-experts model with 17 billion active parameters and 128 experts

Example Output

Prompt:

"Hello, Llama!"

Output

Hello! It's nice to meet you. Is there something I can help you with or would you like to chat?

Performance Metrics

0.59s Prediction Time
0.60s Total Time
All Input Parameters
{
  "top_p": 1,
  "prompt": "Hello, Llama!",
  "max_tokens": 1024,
  "temperature": 0.6,
  "presence_penalty": 0,
  "frequency_penalty": 0
}
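For context, here is a minimal Python sketch of assembling this request payload. The defaults are copied from the example request above; `build_input` is a hypothetical helper, and the commented-out `replicate.run` call follows the Replicate Python client's documented interface (it requires the `replicate` package and an API token, so it is left commented out):

```python
def build_input(prompt: str, **overrides) -> dict:
    """Merge caller overrides onto the sampling defaults from the example request."""
    payload = {
        "top_p": 1,
        "max_tokens": 1024,
        "temperature": 0.6,
        "presence_penalty": 0,
        "frequency_penalty": 0,
    }
    payload.update(overrides)  # e.g. temperature=0.2 for more deterministic output
    payload["prompt"] = prompt
    return payload

# To actually run the model (requires the `replicate` package and REPLICATE_API_TOKEN):
# import replicate
# output = replicate.run("meta/llama-4-maverick-instruct",
#                        input=build_input("Hello, Llama!"))
# print("".join(output))  # the output schema is an array of strings
```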
Input Parameters
top_k Type: integer, Default: 50
The number of highest-probability tokens to consider when generating output. If > 0, only the top k tokens with the highest probability are kept (top-k filtering).
top_p Type: number, Default: 0.9
A probability threshold for generating the output. If < 1.0, only the top tokens with cumulative probability >= top_p are kept (nucleus filtering). Nucleus filtering is described in Holtzman et al. (http://arxiv.org/abs/1904.09751).
prompt Type: string, Default: (empty)
The prompt to send to the model.
max_tokens Type: integer, Default: 4096, Range: 0 - 131072
The maximum number of tokens the model should generate as output.
min_tokens Type: integer, Default: 0
The minimum number of tokens the model should generate as output.
temperature Type: number, Default: 0.6
The value used to modulate the next-token probabilities; lower values make the output more deterministic.
system_prompt Type: string, Default: You are a helpful assistant.
System prompt to send to the model. This is prepended to the prompt and helps guide model behavior. Ignored for non-chat models.
stop_sequences Type: string, Default: (empty)
A comma-separated list of sequences at which to stop generation. For example, '<end>,<stop>' will stop generation at the first instance of '<end>' or '<stop>'.
prompt_template Type: string, Default: (empty)
A template used to format the prompt. If not provided, the model's default prompt template is used.
presence_penalty Type: number, Default: 0
Presence penalty: positive values penalize tokens that have already appeared in the output, encouraging the model to introduce new tokens.
frequency_penalty Type: number, Default: 0
Frequency penalty: positive values penalize tokens in proportion to how often they have already appeared, reducing verbatim repetition.
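To illustrate how the sampling parameters above interact, here is a pure-Python sketch of temperature scaling followed by top-k and nucleus (top-p) filtering over a toy logit vector. This is an illustrative reimplementation of the standard technique, not the model's actual sampling code:

```python
import math

def filter_logits(logits, top_k=50, top_p=0.9, temperature=0.6):
    """Apply temperature scaling, then top-k and nucleus (top-p) filtering.
    Returns a renormalized probability distribution over token indices."""
    # Temperature scaling: values below 1.0 sharpen the distribution.
    scaled = [l / temperature for l in logits]
    # Softmax (shifted by the max for numerical stability).
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [(i, e / total) for i, e in enumerate(exps)]
    probs.sort(key=lambda pair: pair[1], reverse=True)
    # Top-k: if top_k > 0, keep only the k highest-probability tokens.
    if top_k > 0:
        probs = probs[:top_k]
    # Top-p: keep the smallest prefix whose cumulative probability >= top_p.
    if top_p < 1.0:
        kept, cum = [], 0.0
        for i, p in probs:
            kept.append((i, p))
            cum += p
            if cum >= top_p:
                break
        probs = kept
    # Renormalize over the surviving tokens.
    z = sum(p for _, p in probs)
    return {i: p / z for i, p in probs}
```

With the defaults shown above (temperature 0.6, top_p 0.9), a peaked toy distribution is typically cut down to its one or two most likely tokens before sampling.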
Output Schema

Output

Type: array, Items Type: string

Example Execution Logs
Prompt: Hello, Llama!
Input token count: 5
Output token count: 24
TTFT: 0.39s
Tokens per second: 40.51
Total time: 0.59s
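The throughput line can be reconstructed from the other log values, assuming tokens per second is simply output tokens divided by total time. A quick check (the small gap versus the logged 40.51 comes from the total time being rounded to 0.59 s in the log):

```python
output_tokens = 24
total_time = 0.59  # seconds, as rounded in the log above

tokens_per_second = output_tokens / total_time
print(round(tokens_per_second, 2))  # ~40.68 with the rounded time; the log used unrounded timings
```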
Version Details
Version ID
629dc152227acd917296bbff0f16d98ebd2c3ab14aebb361358a3cfc9d0963b8
Version Created
January 22, 2026