openai/gpt-4.1

Official · 199.8K runs · Published May 2025 · Cog 0.16.6
Tags: code-generation, image-to-text, text-generation

About

OpenAI's flagship GPT model for complex tasks.

Example Output

Prompt:

"What is happening in this image?"

Input Images

Input image 1

Output

In the image, someone is spreading butter on a slice of toast using a product labeled "BUTTER STICK TYPE." The product resembles a glue stick, but it is meant for butter, allowing the user to easily apply butter to toast by rubbing the stick directly onto the bread. This is a creative and convenient way to spread butter, especially while the toast is still warm and the butter softens and melts quickly.

Performance Metrics

3.99s Prediction Time
4.01s Total Time
All Input Parameters
{
  "prompt": "What is happening in this image?",
  "image_input": [
    "https://replicate.delivery/pbxt/MvnA4wptE8FOHD44bKsfVj8hQdXSvdDAcFgYs5GEODou9OP9/4b2ebb2d-89d8-43de-bc84-51c380365a40.jpg"
  ],
  "temperature": 1,
  "system_prompt": "You are a helpful assistant."
}
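The parameters above can be sent from Python with the Replicate client's `replicate.run` API. This is a minimal sketch, not official documentation: `build_input` is a hypothetical helper that assembles the documented input keys, and the live call runs only when an API token is configured.

```python
import os

# Sketch (assumption): calling openai/gpt-4.1 on Replicate from Python.
# build_input is a hypothetical helper that assembles the input keys
# documented on this page; the live call uses replicate.run.

def build_input(prompt, image_urls=None, temperature=1.0,
                system_prompt="You are a helpful assistant.",
                max_completion_tokens=4096):
    """Assemble the input dict from the parameters documented below."""
    payload = {
        "prompt": prompt,
        "temperature": temperature,
        "system_prompt": system_prompt,
        "max_completion_tokens": max_completion_tokens,
    }
    if image_urls:
        # image_input is a list of image URLs
        payload["image_input"] = list(image_urls)
    return payload

# Only attempt a live call when an API token is configured.
if os.environ.get("REPLICATE_API_TOKEN"):
    import replicate  # pip install replicate
    output = replicate.run("openai/gpt-4.1", input=build_input(
        "What is happening in this image?",
        image_urls=["https://replicate.delivery/pbxt/MvnA4wptE8FOHD44bKsfVj8hQdXSvdDAcFgYs5GEODou9OP9/4b2ebb2d-89d8-43de-bc84-51c380365a40.jpg"],
    ))
    # The output schema is an array of strings; concatenate for the full text.
    print("".join(output))
```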
Input Parameters
top_p · number · default: 1 · range: 0 to 1
Nucleus sampling parameter: the model considers only the tokens comprising the top_p probability mass (0.1 means only the tokens in the top 10% probability mass are considered).
prompt · string
The prompt to send to the model. Ignored if messages is provided.
messages · array · default: none
A JSON string representing a list of messages, for example: [{"role": "user", "content": "Hello, how are you?"}]. If provided, prompt and system_prompt are ignored.
image_input · array · default: none
List of images to send to the model.
temperature · number · default: 1 · range: 0 to 2
Sampling temperature between 0 and 2.
system_prompt · string
System prompt that sets the assistant's behavior.
presence_penalty · number · default: 0 · range: -2 to 2
Positive values penalize new tokens based on whether they already appear in the text so far, increasing the model's likelihood to talk about new topics.
frequency_penalty · number · default: 0 · range: -2 to 2
Positive values penalize repeated tokens, reducing verbatim repetition.
max_completion_tokens · integer · default: 4096
Maximum number of completion tokens to generate.
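Note that `messages` expects a JSON *string*, not a raw list. A small sketch of building one with the standard library (`build_messages` is a hypothetical helper, not part of the model's API):

```python
import json

# Sketch: the `messages` input is a JSON string encoding a list of chat
# messages. When it is supplied, `prompt` and `system_prompt` are ignored.

def build_messages(system_text, user_text):
    # json.dumps serializes the message list into the required string form.
    return json.dumps([
        {"role": "system", "content": system_text},
        {"role": "user", "content": user_text},
    ])

messages = build_messages("You are a helpful assistant.",
                          "Hello, how are you?")
```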
Output Schema

Output

Type: array · Items type: string
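Because the output is an array of strings (streamed chunks of the response), concatenating the elements in order yields the full text. The chunk boundaries below are illustrative, not taken from a real run:

```python
# The output schema is a list of string chunks; join them to reconstruct
# the complete response text.
chunks = ["In the image, someone is ", "spreading butter ", "on toast."]
full_text = "".join(chunks)
print(full_text)  # → In the image, someone is spreading butter on toast.
```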

Example Execution Logs
Image size: 100.1KB
Generated response in 3.8sec
/usr/local/lib/python3.12/site-packages/cog/server/scope.py:21: ExperimentalFeatureWarning: current_scope is an experimental internal function. It may change or be removed without warning.
  warnings.warn(
Version Details
Version ID
90d0a88c78740b21d9c98d5b245bce12f06bb2aecf074c1b3108748c2f26ed9c
Version Created
September 16, 2025