nomagick/qwen-14b-chat

5.4K runs · Oct 2023 · Cog 0.8.6 · GitHub · Paper · License
code-generation question-answering text-generation text-translation

About

Qwen-14B-Chat is a Transformer-based large language model, pretrained on a large volume of data including web text, books, and code.

Example Output

Prompt:

"<|im_start|>system
You are a helpful assistant<|im_end|>
<|im_start|>user
请使用英文重复这段话:"为了使模型生成最优输出,当使用 Qwen 时需要使用特定的输入格式(chatml),请按照ChatML格式组织输入。" 之后介绍一下Chat ML格式<|im_end|>
<|im_start|>assistant"

Output

"To generate the optimal output from the model, a specific input format called ChatML is required when using Qwen. Please organize the input according to the ChatML format."

ChatML (Chat Markup Language) is a specific format designed for organizing inputs for natural language processing models. It is a markup language that allows users to structure their input in a way that is easy for the model to understand. ChatML includes various tags and attributes that can be used to specify different aspects of the input, such as the type of input (e.g. text, image), the language of the input, and any special instructions or requirements for processing the input. By using ChatML, users can ensure that their input is properly formatted and structured, which can help to improve the accuracy and efficiency of the model's output.
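In practice, the prompt shown above is built by wrapping each turn in <|im_start|> / <|im_end|> delimiters and leaving an open assistant turn for the model to complete. Below is a minimal sketch in Python; the build_chatml helper is illustrative and not part of the model or the Replicate API.

def build_chatml(system: str, user: str) -> str:
    # Wrap a system message and a user message in ChatML delimiters,
    # leaving an open assistant turn for the model to complete.
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

prompt = build_chatml(
    system="You are a helpful assistant",
    user="Summarize the ChatML format in one sentence.",
)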

Performance Metrics

24.44s Prediction Time
451.95s Total Time
All Input Parameters
{
  "top_p": 0.8,
  "prompt": "<|im_start|>system\nYou are a helpful assistant<|im_end|>\n<|im_start|>user\n请使用英文重复这段话:\"为了使模型生成最优输出,当使用 Qwen 时需要使用特定的输入格式(chatml),请按照ChatML格式组织输入。\" 之后介绍一下Chat ML格式<|im_end|>\n<|im_start|>assistant\n",
  "max_tokens": 2048,
  "temperature": 0.75
}
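The same prediction can be reproduced with the official Replicate Python client (pip install replicate). A minimal sketch, assuming REPLICATE_API_TOKEN is set in the environment; the shorter English prompt is illustrative, while the other parameter values mirror the block above.

import replicate

# Run the model with the parameters shown above (shorter prompt used here
# for readability). Assumes REPLICATE_API_TOKEN is exported.
output = replicate.run(
    "nomagick/qwen-14b-chat:f9e1ed25e2073f72ff9a3f46545d909b1078e674da543e791dec79218072ae70",
    input={
        "top_p": 0.8,
        "prompt": (
            "<|im_start|>system\nYou are a helpful assistant<|im_end|>\n"
            "<|im_start|>user\nSummarize the ChatML format in one sentence.<|im_end|>\n"
            "<|im_start|>assistant\n"
        ),
        "max_tokens": 2048,
        "temperature": 0.75,
    },
)
for chunk in output:
    print(chunk, end="")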
Input Parameters
top_p (Type: number, Default: 0.8, Range: 0 - 1)
Top-p (nucleus sampling) probability
prompt (Type: string, Default: <|im_start|>system You are a helpful assistant<|im_end|> <|im_start|>user 请使用英文重复这段话:"为了使模型生成最优输出,当使用 Qwen 时需要使用特定的输入格式(chatml),请按照示例格式组织输入。"<|im_end|> <|im_start|>assistant)
Prompt for completion, in ChatML format
max_tokens (Type: integer, Default: 2048, Range: 1 - 8192)
Maximum number of new tokens to generate
temperature (Type: number, Default: 0.75, Range: 0 - 5)
Sampling temperature
Output Schema

Output

Type: array, Items Type: string
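Because the output is an array of strings (streamed chunks), clients typically concatenate the items to recover the full completion. A minimal sketch; the chunk values below are made up for illustration.

chunks = [
    "To generate the optimal output from the model, ",
    "a specific input format called ChatML is required.",
]
completion = "".join(chunks)  # join the streamed string array into one completion
print(completion)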

Version Details
Version ID
f9e1ed25e2073f72ff9a3f46545d909b1078e674da543e791dec79218072ae70
Version Created
October 9, 2023
Run on Replicate →