01-ai/yi-34b-200k
About
The Yi series models are large language models trained from scratch by developers at 01.AI.

Example Output
"Here is the itinerary for my dog's birthday party:"
Output
10:00 AM - Guests arrive
10:30 AM - Pupcakes and dog-friendly treats
11:00 AM - Games and activities
12:00 PM - Cake and ice cream
1:00 PM - Goodbye hugs and take home treats
"Wow!" Sophie exclaims. "That sounds like an amazing party! I'm sure Daisy will have a great time."
"Thanks, Sophie! I can't wait for Daisy to celebrate her special day with her friends."
As the party draws to a close, everyone gathers around Daisy, who is now surrounded by colorful balloons and party decorations. The guests sing "Happy Birthday" to Daisy, and she wags her tail excitedly.
"Wow, Daisy! I can't believe how much fun we all had at your party!" says Sophie, holding Daisy's paw.
"Me too! I'm so happy my friends came to celebrate with me."
"You know what, Daisy? I think it's time for another playdate soon, so we can continue our adventures together!"
"Yes! Let's plan another playdate soon. I can't wait to see you again, Sophie!"
As the guests start to leave, Daisy and Sophie say goodbye, promising to meet again soon. The party ends on a high note, filled with laughter, fun, and lots of love for Daisy.
All Input Parameters
{
  "top_k": 50,
  "top_p": 0.95,
  "prompt": "Here is the itinerary for my dog's birthday party:\n",
  "temperature": 0.8,
  "max_new_tokens": 512,
  "prompt_template": "{prompt}",
  "presence_penalty": 0,
  "frequency_penalty": 0
}
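These parameters can be passed to the model through the Replicate Python client. The sketch below is an assumption about typical client usage, not an excerpt from this model's documentation; the network call is commented out so the payload can be inspected offline, and the exact version pin is omitted (see Version Details below).

```python
# Sketch: invoking 01-ai/yi-34b-200k via the Replicate Python client.
# The parameter values mirror the "All Input Parameters" example above.
input_params = {
    "prompt": "Here is the itinerary for my dog's birthday party:\n",
    "prompt_template": "{prompt}",
    "top_k": 50,
    "top_p": 0.95,
    "temperature": 0.8,
    "max_new_tokens": 512,
    "presence_penalty": 0,
    "frequency_penalty": 0,
}

# import replicate
# output = replicate.run("01-ai/yi-34b-200k", input=input_params)
# print("".join(output))  # the model streams output as a sequence of strings
```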
Input Parameters
- top_k
- The number of highest probability tokens to consider for generating the output. If > 0, only keep the top k tokens with highest probability (top-k filtering).
- top_p
- A probability threshold for generating the output. If < 1.0, only keep the top tokens with cumulative probability >= top_p (nucleus filtering). Nucleus filtering is described in Holtzman et al. (http://arxiv.org/abs/1904.09751).
- prompt (required)
- The text prompt sent to the model.
- temperature
- The value used to modulate the next-token probabilities. Lower values make the output more deterministic; higher values make it more random.
- max_new_tokens
- The maximum number of tokens the model should generate as output.
- prompt_template
- The template used to format the prompt. The input prompt is inserted into the template using the `{prompt}` placeholder.
- presence_penalty
- Penalty applied to tokens that have already appeared in the output, encouraging the model to introduce new tokens.
- frequency_penalty
- Penalty that scales with how often a token has already appeared in the output, discouraging repetition.
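The interaction of temperature, top-k, and nucleus (top-p) filtering described above can be sketched in plain Python. This is an illustrative reimplementation of the standard sampling pipeline, not the model's actual code; the function name and exact renormalization are assumptions, and input probabilities are assumed to be strictly positive.

```python
import math

def filter_next_token_probs(probs, top_k=50, top_p=0.95, temperature=0.8):
    """Return a renormalized {token_index: probability} dict after
    temperature scaling, top-k filtering, and nucleus (top-p) filtering."""
    # Temperature-scale the distribution: softmax(log(p) / T).
    logits = [math.log(p) / temperature for p in probs]
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    total = sum(exps)
    scaled = [e / total for e in exps]

    # Sort token indices by probability, highest first.
    order = sorted(range(len(scaled)), key=lambda i: scaled[i], reverse=True)

    # Top-k filtering: if top_k > 0, keep only the k most probable tokens.
    if top_k > 0:
        order = order[:top_k]

    # Nucleus filtering: keep the smallest prefix whose cumulative
    # probability reaches top_p (Holtzman et al., 2019).
    kept, cum = [], 0.0
    for i in order:
        kept.append(i)
        cum += scaled[i]
        if cum >= top_p:
            break

    # Renormalize over the surviving tokens before sampling.
    z = sum(scaled[i] for i in kept)
    return {i: scaled[i] / z for i in kept}
```

With `probs=[0.4, 0.3, 0.2, 0.05, 0.05]`, `top_k=4`, `top_p=0.8`, and `temperature=1.0`, the first three tokens survive (cumulative 0.4, 0.7, 0.9) and are renormalized to sum to 1.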
Output Schema
Example Execution Logs
Generated 319 tokens in 11.517454862594604 seconds.
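The log above works out to roughly 27.7 tokens per second; a quick sanity check:

```python
# Throughput implied by the execution log above.
tokens, seconds = 319, 11.517454862594604
throughput = tokens / seconds
print(f"{throughput:.1f} tokens/sec")  # prints: 27.7 tokens/sec
```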
Version Details
- Version ID
9060f47173629dc2a77c067b7c4a5d260dc1e32e303a99298994d0ab8e903118
- Version Created
November 14, 2023