lucataco/qwen1.5-0.5b 🔢📝 → 📝
About
Qwen1.5 is the beta version of Qwen2, a transformer-based, decoder-only language model pretrained on a large amount of data.

Example Output
Prompt:
"Give me a short introduction to large language model."
Output
A large language model is a type of artificial intelligence system that is designed to generate human-like text based on a large corpus of text data. These models are trained on a large dataset of text, which allows them to learn patterns and relationships in language that are not present in the training data. Once trained, large language models can be used to generate text in a variety of domains, such as natural language processing, machine translation, and text generation. They are often used in a variety of applications, such as chatbots, virtual assistants, and language translation services.
Performance Metrics
14.41s
Prediction Time
14.42s
Total Time
All Input Parameters
{
  "top_k": 1,
  "top_p": 1,
  "prompt": "Give me a short introduction to large language model.",
  "temperature": 1,
  "system_prompt": "You are a helpful assistant.",
  "max_new_tokens": 512,
  "repetition_penalty": 1
}
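The parameter set above can be sent to the model from Python. A minimal sketch, assuming the `replicate` client library is installed and an API token is available in the environment; the model slug and version ID are taken from this page:

```python
import os

# The exact parameter set shown under "All Input Parameters".
payload = {
    "top_k": 1,
    "top_p": 1,
    "prompt": "Give me a short introduction to large language model.",
    "temperature": 1,
    "system_prompt": "You are a helpful assistant.",
    "max_new_tokens": 512,
    "repetition_penalty": 1,
}

# The network call needs an API token; skip it when the token is absent.
if os.environ.get("REPLICATE_API_TOKEN"):
    import replicate  # pip install replicate

    output = replicate.run(
        "lucataco/qwen1.5-0.5b:"
        "aeb5ca9e257ceb13ce4c781f292ef4cb0e09a12528e8ef1a07f3d98439a6c9d5",
        input=payload,
    )
    print("".join(output))
```

The model streams tokens, so the client returns an iterator of string chunks that are joined here into one response.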
Input Parameters
- seed: The seed for the random number generator.
- top_k: When decoding text, samples from the top k most likely tokens; lower this value to ignore less likely tokens.
- top_p: When decoding text, samples from the top p percentage of most likely tokens; lower this value to ignore less likely tokens.
- prompt: Input prompt.
- temperature: Controls the randomness of outputs; values greater than 1 are more random, 0 is deterministic, and 0.75 is a good starting value.
- system_prompt: System prompt.
- max_new_tokens: The maximum number of tokens to generate.
- repetition_penalty: Penalty for repeated words in generated text; 1 is no penalty, values greater than 1 discourage repetition, and values less than 1 encourage it.
Output Schema
Output
Example Execution Logs
Using seed: 841683831 The attention mask and the pad token id were not set. As a consequence, you may observe unexpected behavior. Please pass your input's `attention_mask` to obtain reliable results. Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
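The warning in the log can be silenced by passing `attention_mask` and `pad_token_id` to `generate()` explicitly. A sketch, assuming the Hugging Face `transformers` library and the `Qwen/Qwen1.5-0.5B-Chat` checkpoint; the `RUN_QWEN_DEMO` environment guard is a hypothetical switch added here so the heavy model download only happens on request:

```python
import os


def generation_kwargs(tokenizer, prompt):
    """Build generate() kwargs that carry an explicit attention mask
    and pad token id, avoiding the open-end-generation warning."""
    enc = tokenizer(prompt, return_tensors="pt")
    return {
        "input_ids": enc["input_ids"],
        "attention_mask": enc["attention_mask"],  # explicit mask
        "pad_token_id": tokenizer.eos_token_id,   # 151645 for Qwen1.5 chat models
    }


if os.environ.get("RUN_QWEN_DEMO"):
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("Qwen/Qwen1.5-0.5B-Chat")
    model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen1.5-0.5B-Chat")
    out = model.generate(**generation_kwargs(tok, "Hello"), max_new_tokens=8)
    print(tok.decode(out[0], skip_special_tokens=True))
```

With the mask and pad token supplied, `generate()` no longer falls back to `eos_token_id` on its own, which is what the log message above reports.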
Version Details
- Version ID
aeb5ca9e257ceb13ce4c781f292ef4cb0e09a12528e8ef1a07f3d98439a6c9d5
- Version Created
- February 6, 2024