ibm-granite/granite-3.1-2b-instruct
About
Granite-3.1-2B-Instruct is a lightweight and open-source 2B parameter model designed to excel in instruction following tasks such as summarization, problem-solving, text translation, reasoning, code tasks, function-calling, and more.

Example Output
Prompt:
"Please list an IBM Research laboratory located in the United States."
Output
One of IBM's renowned research laboratories in the United States is the IBM Laboratory located in Almaden, California. Established in 1952, it's one of the oldest and largest research centers in the world, focusing on a broad spectrum of disciplines, including computer science, services, storage systems, and materials science and technology. The Almaden lab has made significant contributions to various technological advancements, including the development of the IBM Personal Computer, the RAID storage technology, and the IBM Chemistry Field-Programmable Gate Array (C-FFA).
Performance Metrics
- Prediction Time: 2.55s
- Total Time: 106.28s
All Input Parameters
{
  "top_k": 50,
  "top_p": 0.9,
  "prompt": "Please list an IBM Research laboratory located in the United States.",
  "max_tokens": 512,
  "min_tokens": 0,
  "temperature": 0.6,
  "system_prompt": "You are an expert in tech history and can answer questions eloquently about the milestones of software organizations.",
  "presence_penalty": 0,
  "frequency_penalty": 0
}
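The parameters above can be exercised with a short client sketch. This assumes an OpenAI-compatible /v1/chat/completions endpoint like the one appearing in the example logs; the base URL is a placeholder, and passing `top_k` this way is a common server-side extension rather than part of the standard schema.

```python
import json
import urllib.request

BASE_URL = "http://localhost:8000"  # placeholder; point at your own deployment

# Request body mirroring "All Input Parameters" above.
payload = {
    "model": "ibm-granite/granite-3.1-2b-instruct",
    "messages": [
        {"role": "system",
         "content": ("You are an expert in tech history and can answer "
                     "questions eloquently about the milestones of "
                     "software organizations.")},
        {"role": "user",
         "content": ("Please list an IBM Research laboratory located "
                     "in the United States.")},
    ],
    "max_tokens": 512,
    "temperature": 0.6,
    "top_p": 0.9,
    "top_k": 50,  # non-standard field; some inference servers accept it
    "presence_penalty": 0,
    "frequency_penalty": 0,
}

def ask(url: str = BASE_URL) -> str:
    """POST the payload and return the assistant's reply text."""
    req = urllib.request.Request(
        f"{url}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Calling `ask()` against a running server returns the generated text; the log section below shows the server-side trace of such a request.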
Input Parameters
- top_k: The number of highest-probability tokens to consider when generating output. If > 0, only the top k most probable tokens are kept (top-k filtering).
- top_p: A probability threshold for generating the output. If < 1.0, only the top tokens with cumulative probability >= top_p are kept (nucleus filtering). Nucleus filtering is described in Holtzman et al. (http://arxiv.org/abs/1904.09751).
- prompt: The input prompt.
- max_tokens: The maximum number of tokens the model should generate as output.
- min_tokens: The minimum number of tokens the model should generate as output.
- temperature: The value used to modulate the next-token probabilities; lower values make the output more deterministic.
- system_prompt: System prompt to send to the model. It is prepended to the prompt and helps guide model behavior. Ignored for non-chat models.
- stop_sequences: A comma-separated list of sequences to stop generation at. For example, '<end>,<stop>' will stop generation at the first instance of '<end>' or '<stop>'.
- presence_penalty: Penalizes tokens that have already appeared in the output, encouraging the model to introduce new tokens.
- frequency_penalty: Penalizes tokens in proportion to how often they have already appeared in the output.
Example Execution Logs
INFO: ::1:53940 - "POST /v1/chat/completions HTTP/1.1" 200 OK
Generation took 2.55s
Formatted prompt: Please list an IBM Research laboratory located in the United States.
Version Details
- Version ID: 7b0098e3d43b2c13c23a78ee680b1797e1149b8b0ea492dd8b2b3ccc78f8a94b
- Version Created: December 19, 2024