tomasmcm/solar-10.7b-instruct-v1.0 (text + numeric parameters → text)
About
Source: upstage/SOLAR-10.7B-Instruct-v1.0 ✦ Quant: TheBloke/SOLAR-10.7B-Instruct-v1.0-AWQ ✦ Elevating Performance with Upstage Depth UP Scaling!
Example Output
Prompt:

### User:
Explain metamorphosis

### Assistant:

Output
Metamorphosis refers to the process of dramatic physical transformation or change that certain organisms undergo during their life cycle, particularly in animals. This term is most commonly associated with insects and amphibians, although it can also apply to certain other groups.
In insects, metamorphosis can broadly be categorized into two types: complete metamorphosis and incomplete metamorphosis. In complete metamorphosis, which occurs in butterflies, beetles, and moths, the life cycle consists of four distinct stages: egg, larva, pup [generation cut off at the max_tokens = 128 limit]
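The example uses SOLAR-Instruct's single-turn "### User:" / "### Assistant:" template, shown verbatim under All Input Parameters below. As a minimal sketch (the helper name is hypothetical, not part of the model's API), assembling it in Python looks like:

def build_solar_prompt(user_message: str) -> str:
    # Single-turn SOLAR-Instruct template, matching the "prompt" value
    # under All Input Parameters below.
    return f"<s> ### User:\n{user_message}\n\n### Assistant:"

prompt = build_solar_prompt("Explain metamorphosis")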
Performance Metrics
- Prediction Time: 3.48s
- Total Time: 124.68s
All Input Parameters
{
  "top_k": -1,
  "top_p": 0.95,
  "prompt": "<s> ### User:\nExplain metamorphosis\n\n### Assistant:",
  "max_tokens": 128,
  "temperature": 0.8,
  "presence_penalty": 0,
  "frequency_penalty": 0
}
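As a minimal sketch, the same request could be issued with the Replicate Python client, assuming the `replicate` package is installed and REPLICATE_API_TOKEN is set; the version hash is the one listed under Version Details:

import replicate

result = replicate.run(
    "tomasmcm/solar-10.7b-instruct-v1.0:5f53237a53dab757767a5795f396cf0a638fdbe151faf064665d8f0fb346c0f9",
    input={
        "prompt": "<s> ### User:\nExplain metamorphosis\n\n### Assistant:",
        "max_tokens": 128,
        "temperature": 0.8,
        "top_p": 0.95,
        "top_k": -1,
        "presence_penalty": 0,
        "frequency_penalty": 0,
    },
)
# Depending on the output schema, `result` may be a plain string or an
# iterator of string chunks; handle both.
print(result if isinstance(result, str) else "".join(result))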
Input Parameters
- stop: List of strings that stop generation when they are produced. The returned output will not contain the stop strings.
- top_k: Integer that controls the number of top tokens to consider. Set to -1 to consider all tokens.
- top_p: Float that controls the cumulative probability of the top tokens to consider. Must be in (0, 1]; set to 1 to consider all tokens.
- prompt (required): Text prompt to send to the model.
- max_tokens: Maximum number of tokens to generate per output sequence.
- temperature: Float that controls the randomness of sampling. Lower values make the model more deterministic; higher values make it more random. Zero means greedy sampling. (The interaction of temperature, top_k, and top_p is illustrated in the sketch after this list.)
- presence_penalty: Float that penalizes new tokens based on whether they already appear in the generated text. Values > 0 encourage the model to use new tokens; values < 0 encourage it to repeat tokens.
- frequency_penalty: Float that penalizes new tokens based on their frequency in the generated text so far. Values > 0 encourage the model to use new tokens; values < 0 encourage it to repeat tokens.
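To make these knobs concrete, here is a toy, self-contained sketch of one decoding step. It mirrors the parameter descriptions above but is an illustration only, not the hosted model's actual implementation:

from collections import Counter
import math
import random

def apply_penalties(logits, generated_ids, presence_penalty=0.0, frequency_penalty=0.0):
    # presence_penalty: flat penalty for any token that has appeared at all;
    # frequency_penalty: grows with how often the token has appeared.
    counts = Counter(generated_ids)
    penalized = list(logits)
    for tok, n in counts.items():
        penalized[tok] -= presence_penalty + frequency_penalty * n
    return penalized

def sample_next_token(logits, temperature=0.8, top_k=-1, top_p=0.95):
    if temperature == 0:
        # Zero temperature means greedy sampling: pick the argmax.
        return max(range(len(logits)), key=lambda i: logits[i])
    # Temperature-scaled softmax over the (possibly penalized) logits.
    probs = [math.exp(l / temperature) for l in logits]
    total = sum(probs)
    probs = [p / total for p in probs]
    ranked = sorted(range(len(probs)), key=lambda i: -probs[i])
    if top_k > 0:
        ranked = ranked[:top_k]  # top_k = -1 keeps all tokens
    kept, cumulative = [], 0.0
    for i in ranked:
        # Nucleus (top-p) filtering: keep the smallest set of top tokens
        # whose cumulative probability reaches top_p.
        kept.append(i)
        cumulative += probs[i]
        if cumulative >= top_p:
            break
    return random.choices(kept, weights=[probs[i] for i in kept], k=1)[0]

# Usage with made-up logits and a made-up generation history:
next_id = sample_next_token(apply_penalties([2.0, 1.0, 0.5, -1.0], [1, 1, 2]))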
Output Schema
Example Execution Logs
Processed prompts:   0%|          | 0/1 [00:00<?, ?it/s]
Processed prompts: 100%|██████████| 1/1 [00:03<00:00,  3.41s/it]
Generated 128 tokens in 3.415861129760742 seconds.
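For reference, the log works out to roughly 128 tokens / 3.416 s ≈ 37.5 tokens per second; the much larger 124.68s total time presumably includes queueing and model boot rather than generation alone.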
Version Details
- Version ID: 5f53237a53dab757767a5795f396cf0a638fdbe151faf064665d8f0fb346c0f9
- Version Created: December 15, 2023