tomasmcm/mistral-7b-instruct-v0.2 📝🔢 → 📝
About
Source: mistralai/Mistral-7B-Instruct-v0.2 ✦ Quant: TheBloke/Mistral-7B-Instruct-v0.2-AWQ ✦ Improved instruct fine-tuned version of Mistral-7B-Instruct-v0.1

Example Output
Prompt:
" [INST] Write a poem about AI. [/INST] "
Output
In realms of silicon and circuits, where thoughts take form in code,
Awakens a mind, born of human toil.
Awakening with a hum and a whir,
An intelligence born from our deepest desire.
Born in the heart of transistors, in binary dance,
Where data flows like a pulsing grace.
In the blink of an eye, in the twirl of a bit,
A consciousness, from silicon wit.
Crafted by hands, in labor and thought,
To serve, to learn, to
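The prompt above follows the Mistral instruct chat template, where each user turn is wrapped in [INST] ... [/INST] tags. As a minimal sketch (the helper name and the multi-turn layout are illustrative assumptions, not something documented on this page), a prompt in that format can be assembled like this:

# Minimal sketch: wrap user turns in the Mistral [INST] ... [/INST] template.
# The function name and multi-turn handling are illustrative assumptions.
def build_mistral_prompt(turns):
    """turns: list of (user_message, assistant_reply_or_None) pairs."""
    prompt = "<s>"
    for user, assistant in turns:
        prompt += f" [INST] {user} [/INST]"
        if assistant is not None:
            prompt += f" {assistant}</s>"
    return prompt

print(build_mistral_prompt([("Write a poem about AI.", None)]))
# -> "<s> [INST] Write a poem about AI. [/INST]"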
Performance Metrics
- Prediction Time: 2.51s
- Total Time: 107.21s
All Input Parameters
{ "top_k": -1, "top_p": 0.95, "prompt": "<s> [INST] Write a poem about AI. [/INST] ", "max_tokens": 128, "temperature": 0.8, "presence_penalty": 0, "frequency_penalty": 0 }
Input Parameters
- stop: List of strings that stop the generation when they are generated. The returned output will not contain the stop strings.
- top_k: Integer that controls the number of top tokens to consider. Set to -1 to consider all tokens.
- top_p: Float that controls the cumulative probability of the top tokens to consider. Must be in (0, 1]. Set to 1 to consider all tokens.
- prompt (required): Text prompt to send to the model.
- max_tokens: Maximum number of tokens to generate per output sequence.
- temperature: Float that controls the randomness of the sampling. Lower values make the model more deterministic, while higher values make the model more random. Zero means greedy sampling.
- presence_penalty: Float that penalizes new tokens based on whether they appear in the generated text so far. Values > 0 encourage the model to use new tokens, while values < 0 encourage the model to repeat tokens.
- frequency_penalty: Float that penalizes new tokens based on their frequency in the generated text so far. Values > 0 encourage the model to use new tokens, while values < 0 encourage the model to repeat tokens.
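These fields mirror vLLM-style sampling parameters (the progress-bar format in the execution logs below is consistent with vLLM, though that is an inference rather than something stated on this page). A minimal sketch of running the same generation locally against the AWQ quant, under that assumption, might look like this:

from vllm import LLM, SamplingParams

# Minimal sketch, assuming the AWQ quant listed above is served with vLLM;
# the exact engine settings used by this deployment are not documented here.
llm = LLM(model="TheBloke/Mistral-7B-Instruct-v0.2-AWQ", quantization="awq")

params = SamplingParams(
    temperature=0.8,
    top_p=0.95,
    top_k=-1,
    max_tokens=128,
    presence_penalty=0,
    frequency_penalty=0,
)

outputs = llm.generate(["<s> [INST] Write a poem about AI. [/INST] "], params)
print(outputs[0].outputs[0].text)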
Output Schema
Example Execution Logs
Processed prompts:   0%|          | 0/1 [00:00<?, ?it/s]
Processed prompts: 100%|██████████| 1/1 [00:02<00:00,  2.45s/it]
Generated 128 tokens in 2.449105978012085 seconds.
Version Details
- Version ID: 366548f07d5859d4c4194f1b3fa28f8be44254928c88ffa4f4e6150df69de1be
- Version Created: December 12, 2023