01-ai/yi-34b 🔢📝 → 📝
About
The Yi series models are large language models trained from scratch by developers at 01.AI.

Example Output
Prompt:
"Some great names for dogs are:"
Output:
1. Lobo
2. Santana
3. Juice
4. Pearl
5. Pepper
6. Cash
7. Jager
8. Beer
9. Jack
10. Lucky
11. Beau
12. Boo
13. Dude
14. Cocoa
15. Piggy
16. Gypsy
17. Mocha
18. Lucy
19. Bud
20. Chi
21. ChiChi
22. M
Performance Metrics
- Prediction Time: 4.67s
- Total Time: 4.71s
All Input Parameters
{
  "top_k": 50,
  "top_p": 0.95,
  "prompt": "Some great names for dogs are:",
  "temperature": 0.8,
  "max_new_tokens": 128,
  "prompt_template": "{prompt}",
  "presence_penalty": 0,
  "frequency_penalty": 0
}
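As a minimal sketch, these parameters could be passed to the model through the Replicate Python client (the `replicate` package). The API call itself is shown commented out, since it requires network access and a `REPLICATE_API_TOKEN`; the input dict mirrors the parameters above.

```python
# Input dict mirroring "All Input Parameters" above.
input_params = {
    "prompt": "Some great names for dogs are:",
    "prompt_template": "{prompt}",  # the prompt is inserted verbatim
    "top_k": 50,
    "top_p": 0.95,
    "temperature": 0.8,
    "max_new_tokens": 128,
    "presence_penalty": 0,
    "frequency_penalty": 0,
}

# Uncomment to run against the API (requires a configured REPLICATE_API_TOKEN):
# import replicate
# output = replicate.run(
#     "01-ai/yi-34b:d83ccf090ccd5c7fe507ca302a558a850468293385d02bb807ee2753d802dd85",
#     input=input_params,
# )
# print("".join(output))
```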
Input Parameters
- top_k
- The number of highest probability tokens to consider for generating the output. If > 0, only keep the top k tokens with highest probability (top-k filtering).
- top_p
- A probability threshold for generating the output. If < 1.0, only keep the top tokens with cumulative probability >= top_p (nucleus filtering). Nucleus filtering is described in Holtzman et al. (http://arxiv.org/abs/1904.09751).
- prompt (required)
- The text prompt sent to the model.
- temperature
- The value used to modulate the next token probabilities.
- max_new_tokens
- The maximum number of tokens the model should generate as output.
- prompt_template
- The template used to format the prompt. The input prompt is inserted into the template using the `{prompt}` placeholder.
- presence_penalty
- Penalty applied to tokens that have already appeared in the output, encouraging the model to introduce new tokens.
- frequency_penalty
- Penalty applied to tokens in proportion to how often they have already appeared in the output, discouraging repetition.
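The interaction of `top_k` and `top_p` can be sketched in pure Python. This is a toy illustration of the filtering order (top-k first, then the nucleus cutoff), not the model's actual implementation; `filter_logits` and its inputs are hypothetical.

```python
import math

def filter_logits(logits, top_k=0, top_p=1.0):
    """Return the token indices still eligible for sampling after
    top-k filtering followed by nucleus (top-p) filtering."""
    # Softmax over the raw logits.
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Sort token indices by probability, highest first.
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    if top_k > 0:
        order = order[:top_k]  # keep only the k most likely tokens
    kept, cumulative = [], 0.0
    for i in order:
        kept.append(i)
        cumulative += probs[i]
        if cumulative >= top_p:  # nucleus cutoff reached
            break
    return kept

# With a peaked toy distribution, a tight top_p prunes the tail:
print(filter_logits([3.0, 1.0, 0.5, 0.1], top_k=3, top_p=0.85))  # → [0, 1]
```

Lowering `temperature` sharpens the distribution before this filtering, so fewer tokens survive the `top_p` cutoff; raising it has the opposite effect.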
Output Schema
Output
Example Execution Logs
Generated 128 tokens in 4.597692012786865 seconds.
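The log line above implies a generation throughput of roughly 28 tokens per second, a simple arithmetic check:

```python
# Throughput implied by the execution log above.
tokens = 128
seconds = 4.597692012786865
throughput = tokens / seconds  # ≈ 27.8 tokens/second
```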
Version Details
- Version ID
d83ccf090ccd5c7fe507ca302a558a850468293385d02bb807ee2753d802dd85
- Version Created
- November 14, 2023