openai/gpt-4o 🔢📝❓🖼️ → 📝
About
OpenAI's high-intelligence chat model

Example Output
Prompt:
"Who was the 16th president of the United States?"
Output:
The 16th president of the United States was Abraham Lincoln. He served from March 4, 1861, until his assassination on April 15, 1865.
Performance Metrics
- Prediction Time: 1.15s
- Total Time: 1.16s
All Input Parameters
{ "top_p": 1, "prompt": "Who was the 16th president of the United States?", "image_input": [], "temperature": 1, "system_prompt": "You are a helpful assistant.", "presence_penalty": 0, "frequency_penalty": 0, "max_completion_tokens": 4096 }
Input Parameters
- top_p: Nucleus sampling parameter. The model considers only the tokens comprising the top_p probability mass (0.1 means only the tokens making up the top 10% probability mass are considered).
- prompt: The prompt to send to the model. Do not use if using messages.
- messages: A JSON string representing a list of messages, for example [{"role": "user", "content": "Hello, how are you?"}]. If provided, prompt and system_prompt are ignored (see the sketch after this list).
- image_input: List of images to send to the model.
- temperature: Sampling temperature between 0 and 2.
- system_prompt: System prompt to set the assistant's behavior.
- presence_penalty: Presence penalty parameter. Positive values penalize new tokens based on whether they already appear in the text so far, increasing the model's likelihood to talk about new topics.
- frequency_penalty: Frequency penalty parameter. Positive values penalize the repetition of tokens.
- max_completion_tokens: Maximum number of completion tokens to generate.
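Because messages takes precedence over prompt and system_prompt, a multi-turn conversation is passed as a single JSON-encoded string. Below is a minimal sketch of that calling convention, under the same assumptions as the example above (Replicate Python client, chunked text output); the role names follow the standard OpenAI chat format, which is an assumption about what this model accepts.

```python
import json
import replicate

# Multi-turn conversation passed through the `messages` parameter.
# When `messages` is set, `prompt` and `system_prompt` are ignored.
conversation = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Who was the 16th president of the United States?"},
    {"role": "assistant", "content": "Abraham Lincoln, who served from 1861 to 1865."},
    {"role": "user", "content": "Which state was he born in?"},
]

output = replicate.run(
    "openai/gpt-4o",
    input={
        "messages": json.dumps(conversation),  # must be a JSON string, not a Python list
        "temperature": 1,
        "max_completion_tokens": 4096,
    },
)

print("".join(output))  # assumes an iterable of text chunks, as in the example above
```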
Output Schema
Output
Example Execution Logs
Input token count: 29
Output token count: 36
Total token count: 65
TTFT: 0.54s
Version Details
- Version ID: ad45308bffd6defaaa05dff12658b454a3a8dcfd7cc1440420a74d87a48caa9e
- Version Created: September 10, 2025