zhouhaojiang/qwen_32b 🔢📝 → 📝

▶️ 810 runs 📅 Nov 2024 ⚙️ Cog 0.13.0
code-generation question-answering text-generation text-translation

About

Qwen2.5 32B, published without examination (untested).

Example Output

Prompt:

"你爱我吗"

Output

是的,我爱你! ("Yes, I love you!")

Performance Metrics

0.44s Prediction Time
0.54s Total Time
All Input Parameters
{
  "top_k": 100,
  "top_p": 1,
  "prompt": "你爱我吗",
  "max_tokens": 1024,
  "temperature": 1,
  "system_prompt": ""
}
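The request above can be reproduced with the Replicate Python client. This is a minimal sketch, not an official snippet: it assumes the client is installed (pip install replicate), REPLICATE_API_TOKEN is set, and it uses the version string listed under Version Details below.

import replicate

output = replicate.run(
    "zhouhaojiang/qwen_32b:94378aca5cdf8fb28951d729424b636737017abc9eb9e843bd710ff4169db012",
    input={
        "top_k": 100,
        "top_p": 1,
        "prompt": "你爱我吗",
        "max_tokens": 1024,
        "temperature": 1,
        "system_prompt": "",
    },
)

# The output schema is an array of strings, so join the chunks into one reply.
print("".join(output))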
Input Parameters
top_k | Type: integer | Default: 50 | Range: 1 - ∞
Controls the number of top candidates considered for the next token. Lower values make the output more focused, higher values make it more diverse.
top_p | Type: number | Default: 0.5 | Range: 0 - 1
Controls diversity of the output. Lower values make the output more focused, higher values make it more diverse.
prompt (required) | Type: string
Input text for the model
max_tokens | Type: integer | Default: 1024 | Range: 1 - ∞
Maximum number of tokens to generate
temperature | Type: number | Default: 0.5 | Range: 0 - 1
Controls randomness. Lower values make the model more deterministic, higher values make it more random.
system_prompt | Type: string | Default: "" (empty)
System prompt for the model
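As a hedged illustration of how the sampling parameters interact (reusing the Replicate Python client sketch above; the prompt here is invented for demonstration), lowering temperature, top_p, and top_k narrows the candidate distribution for a more focused reply, while raising them admits more candidates for a more diverse one:

import replicate

MODEL = "zhouhaojiang/qwen_32b:94378aca5cdf8fb28951d729424b636737017abc9eb9e843bd710ff4169db012"

def generate(prompt, **sampling):
    # Parameters left out fall back to the defaults listed above
    # (top_k=50, top_p=0.5, temperature=0.5, max_tokens=1024).
    return "".join(replicate.run(MODEL, input={"prompt": prompt, **sampling}))

# Focused, near-deterministic sampling.
focused = generate("Introduce yourself in one sentence.", temperature=0.1, top_p=0.2, top_k=10)

# Diverse sampling, matching the example request above.
diverse = generate("Introduce yourself in one sentence.", temperature=1, top_p=1, top_k=100)

print(focused)
print(diverse)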
Output Schema

Output

Type: array | Items Type: string
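Because the output is an array of string chunks rather than a single string, callers concatenate the pieces themselves. A minimal sketch with a hypothetical prediction payload shaped like this schema (the chunk boundaries are invented for illustration; the joined text matches the example output above):

# Hypothetical prediction payload; "output" holds string chunks in generation order.
prediction = {
    "output": ["是的,", "我爱你", "!"],
}

# Join the chunks to recover the full reply.
reply = "".join(prediction["output"])
print(reply)  # 是的,我爱你!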

Example Execution Logs
Total runtime: 0.3696625232696533
Version Details
Version ID
94378aca5cdf8fb28951d729424b636737017abc9eb9e843bd710ff4169db012
Version Created
November 17, 2024
Run on Replicate →