openai/gpt-4.1-nano

Official | 430.6K runs | May 2025 | Cog 0.16.6 | License
code-generation image-analysis image-to-text question-answering text-generation text-translation

About

Fastest, most cost-effective GPT-4.1 model from OpenAI

Example Output

Prompt:

"What is San Junipero?"

Output

San Junipero is a fictional town featured in the anthology series Black Mirror, specifically in the episode titled "San Junipero." The episode originally aired in 2016 and is one of the most acclaimed installments of the series.

In the story, San Junipero is depicted as a picturesque, nostalgic beach town set in California, primarily during the 1980s. It is revealed to be a virtual reality simulation where people can visit temporarily or permanently after death or while still alive. The town serves as a digital paradise, allowing residents to relive past memories, connect with loved ones, and choose to extend their consciousness beyond physical mortality.

The episode explores themes of love, memory, mortality, and the ethical dilemmas associated with virtual immortality, making San Junipero a significant and thought-provoking setting within the narrative.

Performance Metrics

Prediction time: 1.19s
Total time: 10.11s
All Input Parameters
{
  "prompt": "What is San Junipero?",
  "temperature": 1,
  "system_prompt": "You are a helpful assistant."
}
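
For reference, here is a minimal sketch of sending these same inputs with the Replicate Python client (pip install replicate). It assumes REPLICATE_API_TOKEN is set in the environment; the model identifier and input names come from this page, and everything else is illustrative.

import replicate

# Run openai/gpt-4.1-nano with the inputs shown above.
output = replicate.run(
    "openai/gpt-4.1-nano",
    input={
        "prompt": "What is San Junipero?",
        "temperature": 1,
        "system_prompt": "You are a helpful assistant.",
    },
)

# The model returns its text as an array of string chunks (see Output Schema below),
# so join them to get the full response.
print("".join(output))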
Input Parameters
top_p
Type: number | Default: 1 | Range: 0 to 1
Nucleus sampling parameter: the model considers the results of the tokens with top_p probability mass. (0.1 means only the tokens comprising the top 10% probability mass are considered.)
prompt
Type: string
The prompt to send to the model. Do not use if using messages.
messages
Type: array
A JSON string representing a list of messages. For example: [{"role": "user", "content": "Hello, how are you?"}]. If provided, prompt and system_prompt are ignored. (See the sketch after this list.)
image_input
Type: array
List of images to send to the model.
temperature
Type: number | Default: 1 | Range: 0 to 2
Sampling temperature between 0 and 2.
system_prompt
Type: string
System prompt to set the assistant's behavior.
presence_penalty
Type: number | Default: 0 | Range: -2 to 2
Presence penalty parameter: positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.
frequency_penalty
Type: number | Default: 0 | Range: -2 to 2
Frequency penalty parameter: positive values penalize the repetition of tokens.
max_completion_tokens
Type: integer | Default: 4096
Maximum number of completion tokens to generate.
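
As a hedged illustration of the messages path described above: messages is passed as a JSON string, and when it is set, prompt and system_prompt are ignored. The sketch below uses the same Replicate Python client; the conversation content is made up for the example.

import json
import replicate

# Build the conversation as a Python list, then serialize it to the JSON
# string that the messages parameter expects.
conversation = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello, how are you?"},
]

output = replicate.run(
    "openai/gpt-4.1-nano",
    input={
        "messages": json.dumps(conversation),
        "temperature": 1,
        "max_completion_tokens": 4096,
    },
)
print("".join(output))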
Output Schema

Output

Type: array | Items type: string
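
Because the output is an array of strings, the response arrives as ordered text chunks. A recent Replicate Python client can also stream those chunks as they are generated; this is a sketch, assuming the client's replicate.stream helper is available in your installed version.

import replicate

# Print chunks as they arrive instead of waiting for the full array.
for event in replicate.stream(
    "openai/gpt-4.1-nano",
    input={"prompt": "What is San Junipero?"},
):
    print(str(event), end="")
print()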

Example Execution Logs
Generated response in 1.2sec
/usr/local/lib/python3.12/site-packages/cog/server/scope.py:21: ExperimentalFeatureWarning: current_scope is an experimental internal function. It may change or be removed without warning.
  warnings.warn(
Version Details
Version ID
ec69f9c119aaf7d5a851b4dc9d08337a16906ee397a5d3d492fd52fbc59ea2a1
Version Created
September 16, 2025