lucataco/qwen1.5-72b 🔢📝 → 📝
About
Qwen1.5 is the beta version of Qwen2, a transformer-based, decoder-only language model pretrained on a large amount of data.

Example Output
Prompt:
"Give me a short introduction to large language model."
Output:
A large language model is a type of artificial intelligence system trained on an immense amount of text data, designed to understand, generate, and manipulate human language. These models are typically deep neural networks with billions of parameters, which enable them to learn complex patterns and relationships within language. The goal is to create a model that can perform various natural language processing tasks with high accuracy, such as language translation, text summarization, question-answering, and even conversational dialogue. By continuously learning from vast amounts of text, large language models have become increasingly proficient in understanding and generating coherent, contextually appropriate responses, making them a cornerstone of modern AI research and applications.
Performance Metrics
- Prediction Time: 213.61s
- Total Time: 1014.37s
All Input Parameters
{ "top_k": 1, "top_p": 1, "prompt": "Give me a short introduction to large language model.", "temperature": 1, "system_prompt": "You are a helpful assistant.", "max_new_tokens": 512, "repetition_penalty": 1 }
Input Parameters
- seed: The seed for the random number generator.
- top_k: When decoding text, samples from the top k most likely tokens; lower this value to ignore less likely tokens.
- top_p: When decoding text, samples from the smallest set of most likely tokens whose cumulative probability reaches p; lower this value to ignore less likely tokens.
- prompt: Input prompt.
- temperature: Adjusts the randomness of outputs; 0 is deterministic, values greater than 1 are increasingly random, and 0.75 is a good starting value.
- system_prompt: System prompt.
- max_new_tokens: The maximum number of tokens to generate.
- repetition_penalty: Penalty for repeated words in generated text; 1 is no penalty, values greater than 1 discourage repetition, and values less than 1 encourage it. The sketch after this list illustrates how these sampling controls are commonly applied.
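To make the sampling parameters concrete, here is an illustrative decoding step showing how temperature, top_k, top_p, and repetition_penalty are commonly applied to a next-token logits vector. This is a sketch of the standard technique, not the model server's actual implementation.

```python
# Illustrative sketch of how temperature, top_k, top_p, and
# repetition_penalty are commonly applied to next-token logits.
# Not the model server's actual implementation.
import numpy as np

def sample_next_token(logits, generated_ids, temperature=0.75,
                      top_k=50, top_p=0.9, repetition_penalty=1.1):
    logits = logits.astype(np.float64).copy()

    # Repetition penalty: push down logits of tokens already generated
    # (divide positive logits, multiply negative ones).
    for tok in set(generated_ids):
        if logits[tok] > 0:
            logits[tok] /= repetition_penalty
        else:
            logits[tok] *= repetition_penalty

    # Temperature: values below 1 sharpen the distribution, above 1 flatten it.
    logits /= max(temperature, 1e-8)

    # Top-k: keep only the k most likely tokens.
    if top_k > 0:
        kth_best = np.sort(logits)[-top_k]
        logits[logits < kth_best] = -np.inf

    # Top-p (nucleus): keep the smallest set of tokens whose
    # cumulative probability reaches p; always keep the top token.
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    order = np.argsort(probs)[::-1]
    cumulative = np.cumsum(probs[order])
    beyond_nucleus = order[cumulative > top_p]
    if len(beyond_nucleus) > 1:
        probs[beyond_nucleus[1:]] = 0.0
    probs /= probs.sum()

    return int(np.random.choice(len(probs), p=probs))
```

With `top_k=1`, as in the example request above, the filter keeps only the single most likely token, so decoding is effectively greedy.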
Output Schema
Example Execution Logs
Using seed: 240117260
The attention mask and the pad token id were not set. As a consequence, you may observe unexpected behavior. Please pass your input's `attention_mask` to obtain reliable results.
Setting `pad_token_id` to `eos_token_id`:151645 for open-end generation.
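The warning in these logs comes from Hugging Face Transformers generation. A minimal sketch of how it is usually silenced, assuming a standard `transformers` setup (the checkpoint name is an assumption based on the upstream Qwen1.5 release, and a 72B model would in practice require multiple high-memory GPUs):

```python
# Sketch: avoiding the logged warning by passing attention_mask explicitly
# and setting pad_token_id. Checkpoint name is an assumption.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen1.5-72B-Chat"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer(
    "Give me a short introduction to large language model.",
    return_tensors="pt",
).to(model.device)

# **inputs unpacks both input_ids and attention_mask; setting pad_token_id
# to eos_token_id covers open-ended generation without a pad token.
output_ids = model.generate(
    **inputs,
    max_new_tokens=512,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```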
Version Details
- Version ID: f919d3c43a8758de744cf2908426dd744154120f0a22e457a3fa647acdfe33be
- Version Created: February 6, 2024