openai/gpt-5-nano
About
Fastest, most cost-effective GPT-5 model from OpenAI
Example Output
"Explain Bernoulli's principle"
Output
Bernoulli's principle is a statement in fluid dynamics that describes how the speed of a moving fluid relates to its pressure. In its most common form for incompressible, non-viscous, steady flow along a streamline:
- Where the fluid flows faster, its static pressure is lower.
- Where the fluid flows slower, its static pressure is higher.
Key points to understand:
Conservation of energy: In a horizontal flow (no height change), the Bernoulli equation can be written as:
P + ½ρv² = constant
where P is the static pressure, ρ is the fluid density, and v is the flow speed.
This expresses that the sum of pressure energy and kinetic energy per unit volume remains constant along a streamline.
Interpretations:
- If the speed (v) increases, the pressure (P) must decrease to keep the total energy constant.
- If the speed decreases, the pressure increases.
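The trade-off described above can be checked numerically. This is a minimal sketch (not part of the original output) that solves P₁ + ½ρv₁² = P₂ + ½ρv₂² for the downstream pressure; the function name and sample values are illustrative.

```python
# Bernoulli along a horizontal streamline: P1 + 0.5*rho*v1**2 = P2 + 0.5*rho*v2**2
# Solving for P2 shows the pressure must fall when the flow speeds up
# (incompressible, non-viscous, steady flow assumed).

def pressure_after_speedup(p1, rho, v1, v2):
    """Static pressure P2 (Pa) after the speed changes from v1 to v2 (m/s)."""
    return p1 + 0.5 * rho * (v1**2 - v2**2)

# Air (rho ~ 1.2 kg/m^3) at atmospheric pressure accelerating from 10 to 20 m/s:
p2 = pressure_after_speedup(101_325, 1.2, 10.0, 20.0)
print(p2)  # 101145.0 — the static pressure drops by 180 Pa
```

Doubling the speed here costs only 180 Pa in air; in denser fluids like water the same speed change produces a much larger pressure drop, since the kinetic term scales with ρ.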
Common examples and applications:
- Air over an airplane wing: The wing is shaped so air travels faster over the top surface than below. Faster flow on top lowers pressure, creating lift that helps the airplane rise.
- Venturi effect: In a constricted section of pipe, velocity increases and pressure decreases. This principle is used in carburetors, atomizers, and some flow meters.
- Atomizers and sprayers: Fast airflow across the top of a tube lowers the local pressure, drawing liquid up the tube; a similar pressure difference from wind blowing across a chimney opening can help draw air upward (though chimney draft is driven mainly by buoyancy).
- Baseball with a seam: Spin can affect local airflow and pressure around the ball, influencing its trajectory (though this is often a combination of Bernoulli and other effects).
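The Venturi example above combines two relations: continuity (A₁v₁ = A₂v₂) fixes the throat speed, and Bernoulli then gives the pressure drop. A small sketch, with illustrative values:

```python
# Venturi effect: continuity gives the speed in the constriction,
# Bernoulli gives the resulting pressure drop.

def venturi_pressure_drop(rho, v1, a1, a2):
    """Pressure drop P1 - P2 (Pa) between inlet (area a1) and throat (area a2)."""
    v2 = v1 * a1 / a2                  # continuity: A1*v1 = A2*v2
    return 0.5 * rho * (v2**2 - v1**2)  # Bernoulli along the streamline

# Water (rho = 1000 kg/m^3) at 2 m/s entering a pipe that narrows to half its area:
dp = venturi_pressure_drop(1000.0, 2.0, 1.0, 0.5)
print(dp)  # 6000.0 — pressure in the throat is 6 kPa below the inlet
```

Flow meters exploit exactly this relation in reverse: measuring the pressure drop between inlet and throat lets you infer the flow speed.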
Important caveats and common misconceptions:
- Bernoulli’s principle applies along a streamline for idealized flows (incompressible, non-viscous, steady). Real fluids have viscosity, turbulence, and energy losses, so the simple P + ½ρv² constant form is an idealization.
- It does not by itself explain lift entirely; a complete explanation of wing lift also involves Newton’s third law and the change in momentum of air, circulation, and the angle of attack. Bernoulli’s principle is a part of the broader fluid dynamics picture.
- Pressure differences can also arise from height differences (hydrostatic pressure) and from viscosity and flow separation; these factors mean real-world situations require careful analysis beyond the simplest version.
A simple takeaway: In a flowing fluid, speeding up the flow tends to lower the pressure, and slowing the flow tends to raise the pressure, when you can treat the flow as steady and along a streamline.
Performance Metrics
All Input Parameters
{
"prompt": "Explain Bernoulli's principle",
"messages": [],
"verbosity": "medium",
"image_input": [],
"reasoning_effort": "minimal"
}
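The payload above can be assembled and validated before sending. This is a hypothetical helper, not a documented client API; the allowed values are taken from the parameter descriptions below, but the function name and structure are assumptions.

```python
import json

# Allowed values per this model's parameter docs (assumed complete).
ALLOWED_VERBOSITY = {"low", "medium", "high"}
ALLOWED_EFFORT = {"minimal", "low", "medium", "high"}

def build_input(prompt, verbosity="medium", reasoning_effort="minimal"):
    """Hypothetical helper: build the input dict shown above, rejecting bad enums."""
    if verbosity not in ALLOWED_VERBOSITY:
        raise ValueError(f"verbosity must be one of {sorted(ALLOWED_VERBOSITY)}")
    if reasoning_effort not in ALLOWED_EFFORT:
        raise ValueError(f"reasoning_effort must be one of {sorted(ALLOWED_EFFORT)}")
    return {
        "prompt": prompt,
        "messages": [],
        "verbosity": verbosity,
        "image_input": [],
        "reasoning_effort": reasoning_effort,
    }

print(json.dumps(build_input("Explain Bernoulli's principle"), indent=2))
```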
Input Parameters
- prompt
- The prompt to send to the model. Do not use if using messages.
- messages
- A JSON string representing a list of messages. For example: [{"role": "user", "content": "Hello, how are you?"}]. If provided, prompt and system_prompt are ignored.
- verbosity
- Constrains the verbosity of the model's response: lower values produce more concise answers, higher values more comprehensive ones. Currently supported values are low, medium, and high.
- image_input
- List of images to send to the model
- system_prompt
- System prompt to set the assistant's behavior
- reasoning_effort
- Constrains effort on reasoning for GPT-5 models. Currently supported values are minimal, low, medium, and high. The minimal value gets answers back faster without extensive reasoning first. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response. For higher reasoning efforts you may need to increase your max_completion_tokens to avoid empty responses (where all the tokens are used on reasoning).
- max_completion_tokens
- Maximum number of completion tokens to generate. For higher reasoning efforts you may need to increase your max_completion_tokens to avoid empty responses (where all the tokens are used on reasoning).
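Because messages takes a JSON string rather than a list, it is easy to get its encoding wrong. A short sketch of building a messages-style payload per the descriptions above; the payload shape beyond the documented parameter names is an assumption.

```python
import json

# Per the parameter docs, `messages` is a JSON *string* encoding a list of
# role/content objects; when provided, `prompt` and `system_prompt` are ignored.
messages = json.dumps([
    {"role": "system", "content": "You are a concise physics tutor."},
    {"role": "user", "content": "Explain Bernoulli's principle"},
])

payload = {
    "messages": messages,
    "reasoning_effort": "minimal",   # fastest: skips extensive reasoning first
    "max_completion_tokens": 1024,   # raise this for higher reasoning efforts
}
print(payload["messages"])
```

Note the double encoding: the messages value is itself a JSON string, so serializing the whole payload will escape it again, which is expected.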
Output Schema
Example Execution Logs
- Input token count: 12
- Output token count: 532
- Total token count: 544
- TTFT: 0.94s
- Finish reason: stop
Version Details
- Version ID
- 3e5e85c2393b64269755bdb0e900971f3d8b8f17736cfc46dc786629311c5030
- Version Created
- September 25, 2025