01-ai/yi-6b

161.1K runs · Nov 2023 · Cog 0.8.6 · GitHub · License

Tags: code-generation, question-answering, text-generation, text-translation

About

The Yi series models are large language models trained from scratch by developers at 01.AI.

Example Output

Prompt:

"Question: I had 3 apples and I ate 1. How many apples do I have left?
Answer:"

Output

After eating one apple, I had 2 apples left.
(H) - (A) = A

Performance Metrics

0.32s Prediction Time
0.36s Total Time
All Input Parameters
{
  "top_k": 50,
  "top_p": 0.95,
  "prompt": "Question: I had 3 apples and I ate 1. How many apples do I have left?\nAnswer:",
  "temperature": 0.9,
  "max_new_tokens": 64,
  "prompt_template": "{prompt}",
  "presence_penalty": 1,
  "frequency_penalty": 1
}
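The parameters above can be passed to the model through the Replicate Python client. A minimal sketch, assuming the `replicate` package is installed and an API token is set; the defaults mirror the documented parameter defaults, and the version hash is the one listed under Version Details:

```python
# Documented defaults for 01-ai/yi-6b (from the Input Parameters section).
DEFAULTS = {
    "top_k": 50,
    "top_p": 0.95,
    "temperature": 0.8,
    "max_new_tokens": 512,
    "prompt_template": "{prompt}",
    "presence_penalty": 0,
    "frequency_penalty": 0,
}

def build_input(prompt, **overrides):
    """Merge the model's documented defaults with per-call overrides."""
    return {**DEFAULTS, "prompt": prompt, **overrides}

# Reproduce the example run shown above.
payload = build_input(
    "Question: I had 3 apples and I ate 1. How many apples do I have left?\nAnswer:",
    temperature=0.9,
    max_new_tokens=64,
    presence_penalty=1,
    frequency_penalty=1,
)

# Requires REPLICATE_API_TOKEN in the environment. Per the Output Schema,
# the result is an array of strings, typically joined back together:
# import replicate
# tokens = replicate.run(
#     "01-ai/yi-6b:d302e64fad6b4d85d47b3d1ed569b06107504f5717ee1ec12136987bec1e94f1",
#     input=payload,
# )
# print("".join(tokens))
```

The network call is left commented out since it needs credentials; `build_input` simply shows how overrides layer on top of the defaults.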
Input Parameters
top_k
Type: integer · Default: 50
The number of highest-probability tokens to consider when generating output. If > 0, only the top k most probable tokens are kept (top-k filtering).
top_p
Type: number · Default: 0.95
A probability threshold for generating the output. If < 1.0, only the smallest set of tokens whose cumulative probability is >= top_p is kept (nucleus filtering). Nucleus filtering is described in Holtzman et al. (http://arxiv.org/abs/1904.09751).
prompt (required)
Type: string
The input prompt to the model.
temperature
Type: number · Default: 0.8
The value used to modulate the next-token probabilities; lower values make output more deterministic, higher values more random.
max_new_tokens
Type: integer · Default: 512
The maximum number of tokens the model should generate as output.
prompt_template
Type: string · Default: {prompt}
The template used to format the prompt. The input prompt is inserted into the template using the `{prompt}` placeholder.
presence_penalty
Type: number · Default: 0
Penalty applied to tokens that have already appeared in the output, regardless of how often; positive values discourage repetition.
frequency_penalty
Type: number · Default: 0
Penalty applied to tokens in proportion to how often they have already appeared in the output; positive values discourage repetition.
Output Schema

Output

Type: array · Items Type: string

Example Execution Logs
Generated 23 tokens in 0.25441765785217285 seconds.
Version Details
Version ID
d302e64fad6b4d85d47b3d1ed569b06107504f5717ee1ec12136987bec1e94f1
Version Created
November 14, 2023
Run on Replicate →