saysharastuff/olmo-2-1124-13b-instruct 🔢📝 → 📝
About
A text generation model, serving allenai/OLMo-2-1124-13B-Instruct.

Example Output
Prompt:
"Hello!"
Output
Hello! How can I assist you today? If you have any questions or need information on a wide range of topics, feel free to ask.
Performance Metrics
42.45s
Prediction Time
42.46s
Total Time
All Input Parameters
{
  "top_k": 50,
  "top_p": 0.95,
  "prompt": "Hello!",
  "system": "You are OLMo 2, a helpful and harmless AI Assistant built by the Allen Institute for AI.",
  "max_length": 100,
  "temperature": 0.7
}
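As a sketch, the input above could be assembled and sent with the Replicate Python client. The model identifier is taken from this page's title; the `replicate.run` call itself is left commented out because it needs an API token and network access.

```python
# Build the same input payload shown above.
payload = {
    "top_k": 50,
    "top_p": 0.95,
    "prompt": "Hello!",
    "system": ("You are OLMo 2, a helpful and harmless AI Assistant "
               "built by the Allen Institute for AI."),
    "max_length": 100,
    "temperature": 0.7,
}

# Hedged usage sketch (requires the `replicate` package and an API token):
# import replicate
# output = replicate.run(
#     "saysharastuff/olmo-2-1124-13b-instruct",
#     input=payload,
# )
# print("".join(output))

print(sorted(payload))
```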
Input Parameters
- top_k
- When decoding text, samples from the top k most likely tokens; lower to ignore less likely tokens.
- top_p
- When decoding text, samples from the top p percentage of most likely tokens; lower to ignore less likely tokens.
- prompt (required)
- Text prompt to send to the model.
- system
- System prompt to send to the model. This is prepended to the prompt and helps guide system behavior.
- max_length
- Maximum number of tokens to generate. A word is generally 2-3 tokens.
- temperature
- Adjusts randomness of outputs: values greater than 1 are more random, 0 is deterministic, and 0.75 is a good starting value.
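The sampling parameters above can be illustrated with a toy decoder step. This is a minimal sketch of the standard filtering order (temperature scaling, then top-k, then top-p/nucleus truncation), not the model's actual decoding code; the token logits are invented for illustration.

```python
import math
import random

def sample_next_token(logits, top_k=50, top_p=0.95, temperature=0.7, rng=None):
    """Toy illustration of how top_k, top_p, and temperature combine."""
    rng = rng or random.Random(0)  # fixed seed so the sketch is deterministic
    # Temperature scaling: lower values sharpen the distribution.
    scaled = {tok: logit / temperature for tok, logit in logits.items()}
    # Softmax over the scaled logits (subtract max for numeric stability).
    m = max(scaled.values())
    exps = {tok: math.exp(v - m) for tok, v in scaled.items()}
    z = sum(exps.values())
    probs = sorted(((tok, e / z) for tok, e in exps.items()),
                   key=lambda kv: kv[1], reverse=True)
    # top_k: keep only the k most likely tokens.
    probs = probs[:top_k]
    # top_p: keep the smallest prefix whose cumulative mass reaches top_p.
    kept, cum = [], 0.0
    for tok, p in probs:
        kept.append((tok, p))
        cum += p
        if cum >= top_p:
            break
    # Renormalise the surviving tokens and draw one.
    z = sum(p for _, p in kept)
    r, acc = rng.random() * z, 0.0
    for tok, p in kept:
        acc += p
        if acc >= r:
            return tok
    return kept[-1][0]

# Invented logits: "xylophone" is pruned by top_k, "cat" by top_p.
logits = {"the": 5.0, "a": 4.0, "cat": 2.0, "xylophone": -3.0}
print(sample_next_token(logits, top_k=3, top_p=0.9))
```

Lowering `temperature` concentrates mass on "the"; raising `top_p` toward 1.0 lets "cat" back into the candidate pool.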
Output Schema
Version Details
- Version ID
701ff596bb45a62826fb0f7399f5a5fc665fa6997af3fbb3cb2ae580ab828329
- Version Created
- January 11, 2025