openai/gpt-5
About
OpenAI's new model excelling at coding, writing, and reasoning.

Example Output
Prompt:
"Are you AGI?"
Output
No. I'm a narrow AI language model. I can generate and reason about text across many topics, but I don't have general human-level intelligence, self-awareness, or open-ended autonomy. My abilities are bounded by my training data, design, and the information you provide in a conversation.
Performance Metrics
- Prediction Time: 1.27s
- Total Time: 1.27s
All Input Parameters
{ "prompt": "Are you AGI?", "messages": [], "verbosity": "medium", "image_input": [], "reasoning_effort": "minimal" }
Input Parameters
- prompt
- The prompt to send to the model. Do not use if using messages.
- messages
- A JSON string representing a list of messages. For example: [{"role": "user", "content": "Hello, how are you?"}]. If provided, prompt and system_prompt are ignored.
- verbosity
- Constrains the verbosity of the model's response. Lower values produce more concise responses; higher values produce more verbose ones. Currently supported values are low, medium, and high. GPT-5 supports this parameter to control whether answers are short and to the point or long and comprehensive.
- image_input
- List of images to send to the model
- system_prompt
- System prompt to set the assistant's behavior
- reasoning_effort
- Constrains effort on reasoning for GPT-5 models. Currently supported values are minimal, low, medium, and high. The minimal value gets answers back faster without extensive reasoning first. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response. For higher reasoning efforts you may need to increase your max_completion_tokens to avoid empty responses (where all the tokens are used on reasoning).
- max_completion_tokens
- Maximum number of completion tokens to generate. For higher reasoning efforts you may need to increase your max_completion_tokens to avoid empty responses (where all the tokens are used on reasoning).
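The parameters above map directly onto the model's input schema. The following is a minimal sketch of a call using them, assuming the model is hosted on Replicate and invoked through the official Python client; the model identifier shown here is taken from this page, and the exact path, version pinning, and output shape may differ in your setup.

import replicate

# Hypothetical call; "openai/gpt-5" is the model path shown on this page,
# but you may need to pin the version hash from Version Details instead.
output = replicate.run(
    "openai/gpt-5",
    input={
        "prompt": "Are you AGI?",          # ignored when "messages" is supplied
        "verbosity": "medium",             # low | medium | high
        "reasoning_effort": "minimal",     # minimal | low | medium | high
        "max_completion_tokens": 512,      # raise for higher reasoning efforts
    },
)

# For a multi-turn conversation, pass "messages" as a JSON string instead
# (prompt and system_prompt are then ignored), e.g.:
#   input={"messages": json.dumps([{"role": "user", "content": "Hello, how are you?"}])}

# Language-model outputs on Replicate are typically returned as text chunks;
# joining them yields the complete response.
print("".join(output))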
Output Schema
Example Execution Logs
Input token count: 11
Output token count: 68
Total token count: 79
TTFT: 0.81s
Finish reason: stop
Version Details
- Version ID: 857113f52713e87828867975887fb4d369789052c5c63076933548b2720044da
- Version Created: September 16, 2025