openai/gpt-5 πŸ“β“πŸ–ΌοΈπŸ”’ β†’ πŸ“

⭐ Official ▢️ 364.7K runs πŸ“… Aug 2025 βš™οΈ Cog 0.16.6
code-generation image-analysis image-captioning image-to-text text-generation visual-understanding

About

OpenAI's new model excelling at coding, writing, and reasoning.

Example Output

Prompt:

"Are you AGI?"

Output

No. I’m a narrow AI language model. I can generate and reason about text across many topics, but I don’t have general human-level intelligence, self-awareness, or open-ended autonomy. My abilities are bounded by my training data, design, and the information you provide in a conversation.

Performance Metrics

1.27s Prediction Time
1.27s Total Time
All Input Parameters
{
  "prompt": "Are you AGI?",
  "messages": [],
  "verbosity": "medium",
  "image_input": [],
  "reasoning_effort": "minimal"
}
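
The example above maps directly onto an API call. Below is a minimal sketch using the official Replicate Python client (the replicate package and a REPLICATE_API_TOKEN environment variable are assumed; neither is shown on this page):

import replicate

# Reproduce the example prediction above. The output schema is an
# array of strings (streamed chunks), so join the pieces to get the
# full response text.
output = replicate.run(
    "openai/gpt-5",
    input={
        "prompt": "Are you AGI?",
        "verbosity": "medium",
        "reasoning_effort": "minimal",
    },
)
print("".join(output))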
Input Parameters
prompt Type: string
The prompt to send to the model. Do not set this when using messages.
messages Type: array Default: []
A JSON string representing a list of messages. For example: [{"role": "user", "content": "Hello, how are you?"}]. If provided, prompt and system_prompt are ignored. See the chat sketch after this list.
verbosity Type: string Default: medium
Constrains the verbosity of the model's response. Lower values produce shorter, more concise answers; higher values produce longer, more comprehensive ones. Supported values are low, medium, and high.
image_input Type: array Default: []
List of images to send to the model. See the image sketch after this list.
system_prompt Type: string
System prompt to set the assistant's behavior.
reasoning_effort Type: string Default: minimal
Constrains reasoning effort for GPT-5 models. Supported values are minimal, low, medium, and high. minimal returns answers fastest, without extensive reasoning first; lower efforts generally mean faster responses and fewer tokens spent on reasoning. At higher efforts you may need to increase max_completion_tokens to avoid empty responses (where all tokens are consumed by reasoning).
max_completion_tokens Type: integer
Maximum number of completion tokens to generate. Raise this at higher reasoning efforts to avoid empty responses, as noted above.
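
The messages, verbosity, and reasoning_effort parameters combine into a chat-style call. A sketch, assuming the Replicate Python client. Note the schema ambiguity around messages (typed as an array above but described as a JSON string), so this sketch serializes with json.dumps; if the live schema accepts a plain array, pass the list directly instead:

import json
import replicate

history = [
    {"role": "user", "content": "Summarize this model's input parameters."},
]

output = replicate.run(
    "openai/gpt-5",
    input={
        # prompt and system_prompt are ignored when messages is provided
        "messages": json.dumps(history),
        "verbosity": "low",          # low | medium | high
        "reasoning_effort": "high",  # minimal | low | medium | high
        # At higher reasoning efforts, leave headroom so reasoning
        # tokens don't crowd out the visible answer.
        "max_completion_tokens": 4096,
    },
)
print("".join(output))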
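For image_input, the page documents only "List of images to send to the model". A hypothetical multimodal call, assuming publicly reachable image URLs (Replicate inputs generally accept URLs or file handles; the URLs below are placeholders):

import replicate

output = replicate.run(
    "openai/gpt-5",
    input={
        "prompt": "Describe what these images have in common.",
        "image_input": [
            "https://example.com/photo-1.jpg",  # placeholder URL
            "https://example.com/photo-2.jpg",  # placeholder URL
        ],
        "reasoning_effort": "minimal",
    },
)
print("".join(output))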
Output Schema

Output

Type: array β€’ Items Type: string
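
Because the output is an array of string chunks, responses can also be consumed incrementally rather than joined at the end. A sketch using the Python client's replicate.stream helper, assuming this model exposes a streaming endpoint:

import replicate

# Print chunks as they arrive instead of waiting for the full array.
for event in replicate.stream(
    "openai/gpt-5",
    input={"prompt": "Are you AGI?", "reasoning_effort": "minimal"},
):
    print(str(event), end="")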

Example Execution Logs
Input token count: 11
Output token count: 68
Total token count: 79
TTFT: 0.81s
Finish reason: stop
Version Details
Version ID
857113f52713e87828867975887fb4d369789052c5c63076933548b2720044da
Version Created
September 16, 2025