01-ai/yi-34b-200k

1.7K runs · Nov 2023 · Cog 0.8.6 · GitHub · License

Tags: code-generation, question-answering, text-generation, text-translation

About

The Yi series models are large language models trained from scratch by developers at 01.AI. The 200K variant supports a context window of up to 200,000 tokens.

Example Output

Prompt:

"Here is the itinerary for my dog's birthday party:"

Output

10:00 AM - Guests arrive

10:30 AM - Pupcakes and dog-friendly treats

11:00 AM - Games and activities

12:00 PM - Cake and ice cream

1:00 PM - Goodbye hugs and take home treats

"Wow!" Sophie exclaims. "That sounds like an amazing party! I'm sure Daisy will have a great time."

"Thanks, Sophie! I can't wait for Daisy to celebrate her special day with her friends."

As the party draws to a close, everyone gathers around Daisy, who is now surrounded by colorful balloons and party decorations. The guests sing "Happy Birthday" to Daisy, and she wags her tail excitedly.

"Wow, Daisy! I can't believe how much fun we all had at your party!" says Sophie, holding Daisy's paw.

"Me too! I'm so happy my friends came to celebrate with me."

"You know what, Daisy? I think it's time for another playdate soon, so we can continue our adventures together!"

"Yes! Let's plan another playdate soon. I can't wait to see you again, Sophie!"

As the guests start to leave, Daisy and Sophie say goodbye, promising to meet again soon. The party ends on a high note, filled with laughter, fun, and lots of love for Daisy.

Performance Metrics

- Prediction time: 11.52s
- Total time: 46.48s
All Input Parameters
{
  "top_k": 50,
  "top_p": 0.95,
  "prompt": "Here is the itinerary for my dog's birthday party:\n",
  "temperature": 0.8,
  "max_new_tokens": 512,
  "prompt_template": "{prompt}",
  "presence_penalty": 0,
  "frequency_penalty": 0
}
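The input shown above can be reproduced programmatically. The sketch below assumes the `replicate` Python client is installed and a `REPLICATE_API_TOKEN` environment variable is set; the network call itself is shown commented out, so only the parameter dictionary is constructed here.

```python
# Build the same input dictionary shown in "All Input Parameters".
input_params = {
    "top_k": 50,
    "top_p": 0.95,
    "prompt": "Here is the itinerary for my dog's birthday party:\n",
    "temperature": 0.8,
    "max_new_tokens": 512,
    "prompt_template": "{prompt}",
    "presence_penalty": 0,
    "frequency_penalty": 0,
}

# Hypothetical invocation via the Replicate client (requires network
# access and REPLICATE_API_TOKEN, so it is not executed here):
#
# import replicate
# output = replicate.run(
#     "01-ai/yi-34b-200k:9060f47173629dc2a77c067b7c4a5d260dc1e32e303a99298994d0ab8e903118",
#     input=input_params,
# )
# print("".join(output))  # the output schema is an array of strings
```

Joining the returned array of strings reassembles the streamed completion into one text.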
Input Parameters
top_k — Type: integer. Default: 50.
The number of highest-probability tokens to consider when generating output. If > 0, only the top k most probable tokens are kept (top-k filtering).

top_p — Type: number. Default: 0.95.
A cumulative probability threshold for generating the output. If < 1.0, only the smallest set of top tokens whose cumulative probability reaches top_p is kept (nucleus filtering). Nucleus filtering is described in Holtzman et al. (http://arxiv.org/abs/1904.09751).

prompt (required) — Type: string.
The text prompt sent to the model.

temperature — Type: number. Default: 0.8.
The value used to modulate the next-token probabilities: lower values make the output more deterministic, higher values more varied.

max_new_tokens — Type: integer. Default: 512.
The maximum number of tokens the model should generate as output.

prompt_template — Type: string. Default: {prompt}.
The template used to format the prompt. The input prompt is inserted into the template using the `{prompt}` placeholder.

presence_penalty — Type: number. Default: 0.
Penalizes tokens that have already appeared in the output, encouraging the model to introduce new tokens.

frequency_penalty — Type: number. Default: 0.
Penalizes tokens in proportion to how often they have already appeared in the output.
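The interaction between top_k and top_p can be sketched in plain Python. This is an illustrative re-implementation of the filtering described above, not the model's actual sampling code, and the example distribution is hypothetical.

```python
def filter_top_k_top_p(probs, top_k=50, top_p=0.95):
    """Return the (token, prob) pairs kept after top-k then nucleus filtering."""
    # Rank tokens by descending probability.
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    # Top-k: if top_k > 0, keep only the k most probable tokens.
    if top_k > 0:
        ranked = ranked[:top_k]
    # Top-p (nucleus): keep the smallest prefix whose cumulative
    # probability reaches top_p.
    kept, cumulative = [], 0.0
    for token, p in ranked:
        kept.append((token, p))
        cumulative += p
        if cumulative >= top_p:
            break
    return kept

# Hypothetical next-token distribution.
probs = {"dog": 0.5, "cat": 0.3, "fish": 0.15, "axolotl": 0.05}
print(filter_top_k_top_p(probs, top_k=3, top_p=0.9))
# → [('dog', 0.5), ('cat', 0.3), ('fish', 0.15)]
```

Sampling then draws the next token from this reduced candidate set, with temperature applied to the logits before the probabilities are computed.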
Output Schema

Output

Type: array. Items type: string.

Example Execution Logs
Generated 319 tokens in 11.517454862594604 seconds.
Version Details
Version ID
9060f47173629dc2a77c067b7c4a5d260dc1e32e303a99298994d0ab8e903118
Version Created
November 14, 2023