zsxkib/qwen2-7b-instruct
About
Qwen2: a 7-billion-parameter language model from Alibaba Cloud, fine-tuned for chat completions.
Example Output
Prompt:
"Tell me a joke about only having 7 billion parameters"
Output:
Why did the AI only have 7 billion parameters?
Because it couldn't find a way to compress itself below the world population!
Performance Metrics
- Prediction time: 1.86s
- Total time: 95.57s
All Input Parameters
{
  "top_k": 1,
  "top_p": 1,
  "prompt": "Tell me a joke about only having 7 billion parameters",
  "model_type": "Qwen2-7B-Instruct",
  "temperature": 1,
  "system_prompt": "You are a funny and helpful assistant.",
  "max_new_tokens": 512,
  "repetition_penalty": 1
}
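The same request can be made programmatically. Below is a minimal sketch using the Replicate Python client; it assumes the replicate package is installed and a valid API token is set in the REPLICATE_API_TOKEN environment variable (the version ID is the one listed under Version Details below).

import replicate

# Run the model with the exact inputs shown above. Language models on
# Replicate typically stream their output, so the result is an iterable
# of strings that can be joined into the full reply.
output = replicate.run(
    "zsxkib/qwen2-7b-instruct:5324178307f5ec0239326b429d6b64ae338cd6b51fbe234402a55537a9998ac4",
    input={
        "top_k": 1,
        "top_p": 1,
        "prompt": "Tell me a joke about only having 7 billion parameters",
        "model_type": "Qwen2-7B-Instruct",
        "temperature": 1,
        "system_prompt": "You are a funny and helpful assistant.",
        "max_new_tokens": 512,
        "repetition_penalty": 1,
    },
)
print("".join(output))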
Input Parameters
- seed: The seed for the random number generator.
- top_k: When decoding text, sample only from the k most likely tokens; lower values ignore less likely tokens.
- top_p: When decoding text, sample only from the most likely tokens whose cumulative probability reaches p; lower values ignore less likely tokens.
- prompt: Input prompt.
- model_type: Which of the available 7B Qwen2 models to run.
- temperature: Adjusts the randomness of outputs: 0 is deterministic, values greater than 1 are increasingly random; 0.75 is a good starting value (see the sampling sketch after this list).
- system_prompt: System prompt that sets the assistant's behavior.
- max_new_tokens: The maximum number of tokens to generate.
- repetition_penalty: Penalty for repeated words in generated text; 1 means no penalty, values greater than 1 discourage repetition, and values less than 1 encourage it.
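These decoding parameters compose: temperature first rescales the model's logits, then top_k and top_p prune the candidate pool before one token is drawn. Below is a minimal sketch of that interplay in Python, assuming standard top-k plus nucleus (top-p) sampling; it is illustrative only, and the model's actual sampler may differ in details such as how repetition_penalty is applied to the logits beforehand.

import numpy as np

def sample_next_token(logits, temperature=0.75, top_k=50, top_p=0.9, rng=None):
    """Draw one token id from raw logits using the knobs described above."""
    rng = rng or np.random.default_rng()
    if temperature == 0:
        return int(np.argmax(logits))           # temperature 0: deterministic (greedy)
    scaled = np.asarray(logits, dtype=np.float64) / temperature
    probs = np.exp(scaled - scaled.max())       # numerically stable softmax
    probs /= probs.sum()
    order = np.argsort(probs)[::-1]             # token ids, most likely first
    sorted_probs = probs[order]
    keep = np.ones_like(sorted_probs, dtype=bool)
    if top_k > 0:
        keep[top_k:] = False                    # top_k: keep only the k most likely tokens
    cum = np.cumsum(sorted_probs)
    keep &= (cum - sorted_probs) < top_p        # top_p: cut off once cumulative mass reaches p
    kept = np.where(keep, sorted_probs, 0.0)
    kept /= kept.sum()                          # renormalize over the surviving tokens
    return int(rng.choice(order, p=kept))

With top_k set to 1, as in the example input above, only the single most likely token survives the pruning step, so generation is effectively greedy regardless of temperature.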
Output Schema
Example Execution Logs
Using seed: 759056877
Time to first token: 0.59 seconds
Total generation time: 1.84 seconds
Total tokens generated: 125
Throughput: 68.08 tokens/second
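The throughput figure follows from the logged numbers: 125 tokens / 1.84 seconds ≈ 67.9 tokens/second, consistent with the reported 68.08 once rounding of the displayed generation time is accounted for.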
Version Details
- Version ID: 5324178307f5ec0239326b429d6b64ae338cd6b51fbe234402a55537a9998ac4
- Version Created: June 25, 2024