tomasmcm/openinstruct-mistral-7b 📝🔢 → 📝

▶️ 296 runs 📅 Nov 2023 ⚙️ Cog 0.8.6 📄 Paper ⚖️ License
alpaca-format commercial-use data-analysis instruction-following language-model question-answering text-generation

About

Source: monology/openinstruct-mistral-7b ✦ Quant: TheBloke/openinstruct-mistral-7B-AWQ ✦ Commercially usable 7B model, based on mistralai/Mistral-7B-v0.1 and fine-tuned on VMware/open-instruct
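
As a sketch of how to run it with Replicate's standard Python client: the model identifier and version ID are the ones listed on this page, but the exact return shape should be checked against the output schema below.

# pip install replicate  (assumes REPLICATE_API_TOKEN is set in the environment)
import replicate

output = replicate.run(
    "tomasmcm/openinstruct-mistral-7b:d4a7ec393636c5b082359ea82d90d18537dcb36cf9219364e0d4b0888ca8d375",
    input={
        "prompt": (
            "Below is an instruction that describes a task. "
            "Write a response that appropriately completes the request.\n\n"
            "### Instruction:\nCan you write a poem about open source machine learning?\n\n"
            "### Response:\n"
        ),
        "max_tokens": 128,
    },
)
# The output schema on this page declares a single string.
print(output)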

Example Output

Prompt:

"

Below is an instruction that describes a task. Write a response that appropriately completes the request.

Instruction:

Can you write a poem about open source machine learning?

Response:

"

Output

Machine learning is a fascinating field
That's for sure, I can't disagree
With so many open source tools and libraries
There's a model for every problem

From linear regression to decision trees
To support vector machines and deep neural nets
There's no shortage of options, you can use
Whatever model is best for your data set

But with so many models out there
It can be hard to choose which one to use
So I recommend starting with the basics
And working your way up to more advanced tools

So go forth and explore the wonderful world

Performance Metrics

2.94s Prediction Time
140.06s Total Time

The gap between prediction time and total time likely reflects queue and cold-start overhead rather than generation itself.
All Input Parameters
{
  "top_k": -1,
  "top_p": 0.95,
  "prompt": "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\nCan you write a poem about open source machine learning?\n\n### Response:\n",
  "max_tokens": 128,
  "temperature": 0.8,
  "presence_penalty": 0,
  "frequency_penalty": 0
}
Input Parameters
stop Type: string
List of strings that stop the generation when they are generated. The returned output will not contain the stop strings.
top_k Type: integer, Default: -1
Integer that controls the number of top tokens to consider. Set to -1 to consider all tokens.
top_p Type: number, Default: 0.95, Range: 0.01 to 1
Float that controls the cumulative probability of the top tokens to consider. Must be in (0, 1]. Set to 1 to consider all tokens.
prompt (required) Type: string
Text prompt to send to the model.
max_tokens Type: integer, Default: 128
Maximum number of tokens to generate per output sequence.
temperature Type: number, Default: 0.8, Range: 0.01 to 5
Float that controls the randomness of the sampling. Lower values make the model more deterministic, while higher values make it more random. Zero means greedy sampling.
presence_penalty Type: number, Default: 0, Range: -5 to 5
Float that penalizes new tokens based on whether they appear in the generated text so far. Values > 0 encourage the model to use new tokens, while values < 0 encourage the model to repeat tokens.
frequency_penalty Type: number, Default: 0, Range: -5 to 5
Float that penalizes new tokens based on their frequency in the generated text so far. Values > 0 encourage the model to use new tokens, while values < 0 encourage the model to repeat tokens.
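
For illustration, a hedged example input that combines these parameters to push decoding toward determinism. The keys match the schema above, but the specific values are assumptions chosen within the documented ranges, not recommendations from this page:

input_params = {
    "prompt": (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        "### Instruction:\nList three open source ML libraries.\n\n"
        "### Response:\n"
    ),
    "max_tokens": 64,
    "temperature": 0.01,          # minimum allowed here; closest to greedy sampling
    "top_p": 1,                   # consider the full cumulative distribution
    "top_k": -1,                  # -1 disables top-k filtering
    "stop": "### Instruction:",   # stop if the model starts a new Alpaca turn
    "presence_penalty": 0,
    "frequency_penalty": 0,
}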
Output Schema

Output

Type: string

Example Execution Logs
Processed prompts:   0%|          | 0/1 [00:00<?, ?it/s]
Processed prompts: 100%|██████████| 1/1 [00:02<00:00,  2.82s/it]
Generated 128 tokens in 2.822782278060913 seconds.
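At 128 tokens in roughly 2.82 seconds, that works out to about 45 tokens per second, consistent with the prediction time reported above.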
Version Details
Version ID
d4a7ec393636c5b082359ea82d90d18537dcb36cf9219364e0d4b0888ca8d375
Version Created
November 28, 2023
Run on Replicate →