mistralai/mistral-7b-v0.1

Official · 1.9M runs · Sep 2023 · Cog 0.8.6 · GitHub · Paper · License
Tags: code-generation, document-summarization, text-generation

About

A 7-billion-parameter language model from Mistral AI.

Example Output

Prompt:

"how are you doing today? "

Output

how are you doing today? The good news is that there were no big emergencies, but there was a lot of stuff to do. I’m still getting used to the new routine, but I think I can handle it. I’ve been trying to get up earlier, but I usually end up staying up later. I had the opportunity to work on a bit of my artwork for the first time in a while, and I think I’m going to try to make that a regular thing. I also got to do some reading, which is a big deal for me. I also tried to make something for dinner, but I made a big mess and didn’t really end up finishing anything, so I think I’ll try

Performance Metrics

Prediction time: 7.09s
Total time: 7.08s
All Input Parameters
{
  "prompt": "how are you doing today? ",
  "temperature": 0.7,
  "max_new_tokens": 150
}
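For reference, the example prediction's input can be reproduced as a payload for the Replicate Python client. This is a minimal sketch; it assumes the `replicate` package is installed and `REPLICATE_API_TOKEN` is set, so the actual network call is shown commented out:

```python
# Input payload mirroring the example prediction above.
payload = {
    "prompt": "how are you doing today? ",
    "temperature": 0.7,
    "max_new_tokens": 150,
}

# With the Replicate Python client the model would be invoked as
# (requires the `replicate` package and REPLICATE_API_TOKEN; not run here):
# import replicate
# output = replicate.run("mistralai/mistral-7b-v0.1", input=payload)
# print("".join(output))
```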
Input Parameters
seed · Type: integer
Random seed. Leave blank to randomize the seed.
debug · Type: boolean · Default: false
Provide debugging output in logs.
top_k · Type: integer · Default: 50 · Range: 0 – ∞
When decoding text, samples from the k most likely tokens; lower to ignore less likely tokens.
top_p · Type: number · Default: 0.9 · Range: 0 – 1
When decoding text, samples from the smallest set of tokens whose cumulative probability exceeds p; lower to ignore less likely tokens.
prompt (required) · Type: string
Prompt to send to the model.
temperature · Type: number · Default: 0.75 · Range: 0.01 – 5
Adjusts the randomness of outputs: values above 1 are more random, values near 0 are nearly deterministic. 0.75 is a good starting value.
max_new_tokens · Type: integer · Default: 128 · Range: 1 – ∞
Maximum number of tokens to generate. A word is generally 2–3 tokens.
min_new_tokens · Type: integer · Default: -1 · Range: -1 – ∞
Minimum number of tokens to generate. Set to -1 to disable. A word is generally 2–3 tokens.
stop_sequences · Type: string
A comma-separated list of sequences at which to stop generation. For example, '<end>,<stop>' stops generation at the first occurrence of '<end>' or '<stop>'.
replicate_weights · Type: string
Path to fine-tuned weights produced by a Replicate fine-tune job.
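As an illustration of how a comma-separated `stop_sequences` value behaves, here is a minimal sketch; the helper below is hypothetical, written for illustration, and is not part of the model's API:

```python
def truncate_at_stop(text: str, stop_sequences: str) -> str:
    """Cut `text` at the first occurrence of any stop sequence.

    `stop_sequences` is a comma-separated string such as '<end>,<stop>',
    mirroring the parameter documented above. Hypothetical helper for
    illustration only.
    """
    cut = len(text)
    for seq in stop_sequences.split(","):
        idx = text.find(seq)
        if idx != -1:
            cut = min(cut, idx)
    return text[:cut]

print(truncate_at_stop("Hello<stop> world", "<end>,<stop>"))  # → "Hello"
```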
Output Schema

Output

Type: array · Items type: string

Example Execution Logs
Setting `pad_token_id` to `eos_token_id`:2 for open-end generation.
Version Details
Version ID
3e8a0fb6d7812ce30701ba597e5080689bef8a013e5c6a724fafb108cc2426a0
Version Created
September 29, 2023