openai/gpt-4.1-nano 🔢📝❓🖼️ → 📝
About
Fastest, most cost-effective GPT-4.1 model from OpenAI
Example Output
"What is San Junipero?"
Output
San Junipero is a fictional town featured in the anthology series Black Mirror, specifically in the episode titled "San Junipero." The episode originally aired in 2016 and is one of the most acclaimed installments of the series.
In the story, San Junipero is depicted as a picturesque, nostalgic beach town set in California, primarily during the 1980s. It is revealed to be a virtual reality simulation where people can visit temporarily or permanently after death or while still alive. The town serves as a digital paradise, allowing residents to relive past memories, connect with loved ones, and choose to extend their consciousness beyond physical mortality.
The episode explores themes of love, memory, mortality, and the ethical dilemmas associated with virtual immortality, making San Junipero a significant and thought-provoking setting within the narrative.
Performance Metrics
All Input Parameters
{
"prompt": "What is San Junipero?",
"temperature": 1,
"system_prompt": "You are a helpful assistant."
}
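The same request can also be expressed through the messages parameter instead of prompt and system_prompt (per the parameter docs below, messages takes precedence when both are given). A minimal sketch of building that JSON string with the Python standard library; the variable names are illustrative, not part of the API:

```python
import json

# Equivalent of the prompt/system_prompt input above, expressed as the
# single `messages` parameter: a JSON string encoding a list of
# role/content objects.
messages = json.dumps([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is San Junipero?"},
])

input_payload = {
    "messages": messages,  # when present, prompt and system_prompt are ignored
    "temperature": 1,
}
```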
Input Parameters
- top_p
- Nucleus sampling parameter - the model considers only the tokens whose cumulative probability mass falls within top_p. (0.1 means only the tokens comprising the top 10% probability mass are considered.)
- prompt
- The prompt to send to the model. Do not use if using messages.
- messages
- A JSON string representing a list of messages. For example: [{"role": "user", "content": "Hello, how are you?"}]. If provided, prompt and system_prompt are ignored.
- image_input
- List of images to send to the model
- temperature
- Sampling temperature between 0 and 2
- system_prompt
- System prompt to set the assistant's behavior
- presence_penalty
- Presence penalty parameter - positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.
- frequency_penalty
- Frequency penalty parameter - positive values penalize the repetition of tokens.
- max_completion_tokens
- Maximum number of completion tokens to generate
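The documented rules above (temperature between 0 and 2; messages overriding prompt and system_prompt) can be checked client-side before a request is sent. A hedged sketch under those documented constraints; the function name and error message are illustrative and not part of the API:

```python
def validate_input(params: dict) -> dict:
    """Light client-side checks mirroring the documented parameter rules."""
    t = params.get("temperature")
    if t is not None and not (0 <= t <= 2):
        raise ValueError("temperature must be between 0 and 2")
    # messages takes precedence: prompt and system_prompt are ignored
    # when it is set, so drop them to make the effective request explicit.
    if "messages" in params:
        params = {k: v for k, v in params.items()
                  if k not in ("prompt", "system_prompt")}
    return params

cleaned = validate_input({
    "prompt": "What is San Junipero?",
    "messages": '[{"role": "user", "content": "Hello, how are you?"}]',
    "temperature": 1,
})
```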
Output Schema
Output
Example Execution Logs
Generated response in 1.2sec
/usr/local/lib/python3.12/site-packages/cog/server/scope.py:21: ExperimentalFeatureWarning: current_scope is an experimental internal function. It may change or be removed without warning.
  warnings.warn(
Version Details
- Version ID
- ec69f9c119aaf7d5a851b4dc9d08337a16906ee397a5d3d492fd52fbc59ea2a1
- Version Created
- September 16, 2025