01-ai/yi-6b
About
The Yi series models are large language models trained from scratch by developers at 01.AI.

Example Output
Prompt:
"Question: I had 3 apples and I ate 1. How many apples do I have left?
Answer:"
Output
After eating one apple, I had 2 apples left.
(H) - (A) = A
(H) - (A) = A
Performance Metrics
- Prediction Time: 0.32s
- Total Time: 0.36s
All Input Parameters
{ "top_k": 50, "top_p": 0.95, "prompt": "Question: I had 3 apples and I ate 1. How many apples do I have left?\nAnswer:", "temperature": 0.9, "max_new_tokens": 64, "prompt_template": "{prompt}", "presence_penalty": 1, "frequency_penalty": 1 }
Input Parameters
- top_k: The number of highest-probability tokens to consider when generating the output. If > 0, only the top k tokens with the highest probability are kept (top-k filtering).
- top_p: A probability threshold for generating the output. If < 1.0, only the top tokens with cumulative probability >= top_p are kept (nucleus filtering). Nucleus filtering is described in Holtzman et al. (http://arxiv.org/abs/1904.09751). See the sketch after this list for how top_k, top_p, and temperature interact.
- prompt (required): The input text prompt.
- temperature: The value used to modulate the next-token probabilities.
- max_new_tokens: The maximum number of tokens the model should generate as output.
- prompt_template: The template used to format the prompt. The input prompt is inserted into the template using the `{prompt}` placeholder.
- presence_penalty: Presence penalty.
- frequency_penalty: Frequency penalty.
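To make the sampling parameters above concrete, the following is an illustrative Python sketch of how top_k, top_p (nucleus filtering, per Holtzman et al.), and temperature are commonly combined when choosing the next token. It is not the model's actual decoding code; the function name and toy logits are invented for illustration.

```python
import math
import random

def sample_next_token(logits, top_k=50, top_p=0.95, temperature=0.9):
    """Illustrative sketch of top-k / top-p (nucleus) sampling, not the model's real code."""
    # Temperature scales the logits before softmax: lower values sharpen the distribution.
    probs = [math.exp(l / temperature) for l in logits]
    total = sum(probs)
    probs = [p / total for p in probs]

    # Rank token indices by probability, highest first.
    ranked = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)

    # top_k: keep only the k most probable tokens.
    ranked = ranked[:top_k]

    # top_p: keep the smallest prefix whose cumulative probability reaches top_p.
    kept, cumulative = [], 0.0
    for i in ranked:
        kept.append(i)
        cumulative += probs[i]
        if cumulative >= top_p:
            break

    # Renormalise over the kept tokens and sample one of them.
    weights = [probs[i] for i in kept]
    return random.choices(kept, weights=weights, k=1)[0]

# Example: a toy vocabulary of 4 tokens with made-up logits.
logits = [2.0, 1.0, 0.5, -1.0]
next_token = sample_next_token(logits)
```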
Output Schema
Output
Example Execution Logs
Generated 23 tokens in 0.25441765785217285 seconds.
Version Details
- Version ID: d302e64fad6b4d85d47b3d1ed569b06107504f5717ee1ec12136987bec1e94f1
- Version Created: November 14, 2023
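For reproducible results, a call can be pinned to this exact version by appending the Version ID to the model reference. A short sketch, under the same `replicate` client assumption as above:

```python
import replicate

# Sketch: pin the prediction to the version listed above.
output = replicate.run(
    "01-ai/yi-6b:d302e64fad6b4d85d47b3d1ed569b06107504f5717ee1ec12136987bec1e94f1",
    input={"prompt": "Question: I had 3 apples and I ate 1. How many apples do I have left?\nAnswer:"},
)
print("".join(output))
```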