lucataco/qwen1.5-4b 🔢📝 → 📝
About
Qwen1.5 is the beta version of Qwen2, a transformer-based, decoder-only language model pretrained on a large amount of data.
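The input parameters below (a system prompt defaulting to "You are a helpful assistant.") suggest a chat-tuned variant. As a minimal sketch of running it locally with Hugging Face transformers, assuming the wrapped checkpoint is Qwen/Qwen1.5-4B-Chat (this page does not confirm which variant is served) and transformers >= 4.37:

```python
# Minimal local sketch, assuming the Qwen/Qwen1.5-4B-Chat checkpoint;
# this page does not state which Qwen1.5-4B variant the Replicate model wraps.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen1.5-4B-Chat"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Give me a short introduction to large language model."},
]
# Qwen1.5 ships a chat template; render it and append the generation prompt.
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer([text], return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=512)
# Strip the prompt tokens and decode only the newly generated text.
print(tokenizer.decode(output_ids[0][inputs.input_ids.shape[1]:], skip_special_tokens=True))
```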
Example Output
Prompt:
"Give me a short introduction to large language model."
Output:
A large language model is a type of artificial intelligence system that is trained on a large corpus of text data to learn the patterns and relationships between words and phrases. These models are designed to be able to generate human-like responses to a wide range of prompts and questions, and are often used in applications such as chatbots, language translation, and content generation. Large language models are typically trained using deep learning techniques, which involve feeding the model large amounts of text data and adjusting its parameters to optimize its performance on a specific task. Some of the most well-known large language models include GPT-3, BERT, and RoBERTa.
Performance Metrics
- Prediction time: 56.36s
- Total time: 262.79s

Total time includes time spent queued and booting the model, which is why it exceeds the prediction time itself.
All Input Parameters
```json
{
  "top_k": 1,
  "top_p": 1,
  "prompt": "Give me a short introduction to large language model.",
  "temperature": 1,
  "system_prompt": "You are a helpful assistant.",
  "max_new_tokens": 512,
  "repetition_penalty": 1
}
```
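As a sketch, the same prediction can be reproduced with the Replicate Python client, pinning the version ID listed under Version Details below (requires the `replicate` package and a `REPLICATE_API_TOKEN`):

```python
import replicate

# Model name, version ID, and inputs are taken from this page. The model
# streams its output as chunks of text, which we join into the final string.
output = replicate.run(
    "lucataco/qwen1.5-4b:5f048d18c298b8ff9a8eac1e2de9f7f948bf4ede3cd9071411098a9c24f46b73",
    input={
        "top_k": 1,
        "top_p": 1,
        "prompt": "Give me a short introduction to large language model.",
        "temperature": 1,
        "system_prompt": "You are a helpful assistant.",
        "max_new_tokens": 512,
        "repetition_penalty": 1,
    },
)
print("".join(output))
```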
Input Parameters
- seed: The seed for the random number generator.
- top_k: When decoding text, samples from the top k most likely tokens; lower this to ignore less likely tokens.
- top_p: When decoding text, samples from the top p percentage of most likely tokens; lower this to ignore less likely tokens (see the sketch after this list for how the sampling settings interact).
- prompt: Input prompt.
- temperature: Adjusts the randomness of outputs; values greater than 1 are more random, 0 is deterministic, and 0.75 is a good starting value.
- system_prompt: System prompt.
- max_new_tokens: The maximum number of tokens to generate.
- repetition_penalty: Penalty for repeated words in generated text; 1 means no penalty, values greater than 1 discourage repetition, and values less than 1 encourage it.
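For intuition about how top_k, top_p, and temperature combine, here is a minimal sketch of the standard logit-filtering scheme most samplers use; it is illustrative, not this model's actual predictor code:

```python
import torch

def filter_logits(logits: torch.Tensor, top_k: int = 0, top_p: float = 1.0,
                  temperature: float = 1.0) -> torch.Tensor:
    """Apply temperature, then top-k, then top-p (nucleus) filtering."""
    # Temperature rescales the distribution: below 1 sharpens it, above 1
    # flattens it; 0 degenerates to greedy decoding.
    logits = logits / max(temperature, 1e-8)

    if top_k > 0:
        # Drop every token whose logit is below the k-th largest one.
        kth_best = torch.topk(logits, top_k).values[..., -1, None]
        logits = logits.masked_fill(logits < kth_best, float("-inf"))

    if top_p < 1.0:
        sorted_logits, sorted_idx = torch.sort(logits, descending=True)
        probs = torch.softmax(sorted_logits, dim=-1)
        # Keep the smallest prefix of tokens whose cumulative probability
        # reaches top_p, always retaining at least the most likely token.
        drop = probs.cumsum(dim=-1) - probs > top_p
        sorted_logits = sorted_logits.masked_fill(drop, float("-inf"))
        logits = torch.full_like(logits, float("-inf")).scatter(
            -1, sorted_idx, sorted_logits)
    return logits

# Usage: sample one next token from the filtered distribution.
logits = torch.randn(32_000)  # an arbitrary vocabulary size, for illustration
filtered = filter_logits(logits, top_k=1, top_p=1.0, temperature=1.0)
next_id = torch.multinomial(torch.softmax(filtered, dim=-1), num_samples=1)
```

Note that with this page's defaults (top_k = 1), only the single most likely token survives the filter, so the example output above is effectively greedy decoding.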
Output Schema
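The schema body was not captured above; what follows is an assumption based on how Cog-packaged language models on Replicate typically declare streamed text output, not something confirmed by this page:

```python
# Assumed output schema, NOT copied from this page: Cog language models
# commonly stream the generation as an array of string chunks that the
# site concatenates for display.
output_schema = {
    "type": "array",
    "items": {"type": "string"},
    "title": "Output",
    "x-cog-array-type": "iterator",
    "x-cog-array-display": "concatenate",
}
```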
Example Execution Logs
Using seed: 1072508970
The attention mask and the pad token id were not set. As a consequence, you may observe unexpected behavior. Please pass your input's `attention_mask` to obtain reliable results.
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
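The attention-mask and pad-token lines are standard transformers warnings rather than errors. As a sketch of how a transformers-based predictor would silence them (`tokenizer`, `model`, and `text` follow the loading sketch in the About section; this is not the model's actual source):

```python
# Hypothetical fix for the warnings above, assuming a transformers-based predictor.
inputs = tokenizer([text], return_tensors="pt").to(model.device)
output_ids = model.generate(
    inputs.input_ids,
    attention_mask=inputs.attention_mask,  # silences the attention-mask warning
    pad_token_id=tokenizer.eos_token_id,   # the log shows this defaulting to 151645
    max_new_tokens=512,
)
```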
Version Details
- Version ID: 5f048d18c298b8ff9a8eac1e2de9f7f948bf4ede3cd9071411098a9c24f46b73
- Version created: February 6, 2024