arthur630-tech/mob 🔢📝 → 📝
About
Example Output
Prompt:
"你好呀"
Output
你好呀!我叫李红,是中国上海的。上海是中国最大的城市,人口有14亿人。这里有很多现代化的建筑,上海有很多世界级的博物馆,例如上海博物馆。上海还非常擅
(Translation: "Hello! My name is Li Hong, and I'm from Shanghai, China. Shanghai is China's largest city, with a population of 1.4 billion. There are many modern buildings here, and Shanghai has many world-class museums, such as the Shanghai Museum. Shanghai is also very good at" — output cut off at max_length.)
Performance Metrics
- Prediction Time: 3.94s
- Total Time: 149.40s
All Input Parameters
{ "n": 1, "top_k": 100, "top_p": 1, "prompt": "你好呀", "max_length": 50, "temperature": 0.75, "repetition_penalty": 1 }
Input Parameters
- n: Number of output sequences to generate.
- top_k: When decoding text, samples only from the k most likely next tokens in the probability distribution; lower to ignore less likely tokens.
- top_p: When decoding text, samples from the top p percentage of most likely tokens; lower to ignore less likely tokens.
- prompt (required): Text prompt to send to the model.
- max_length: Maximum number of tokens to generate. A word is generally 2-3 tokens.
- temperature: Adjusts the randomness of outputs; values greater than 1 are more random, 0 is deterministic, and 0.75 is a good starting value. See the sampling sketch after this list.
- repetition_penalty: Penalty for repeated words in generated text; 1 is no penalty, values greater than 1 discourage repetition, and values less than 1 encourage it.
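To make the sampling parameters above concrete, here is a minimal, self-contained sketch of how temperature, top_k, and top_p are commonly combined when choosing the next token. It illustrates the general technique, not this model's actual decoding code; the logits are made-up values, and repetition_penalty is omitted for brevity.

```python
# Sketch of temperature / top_k / top_p (nucleus) sampling over a toy
# next-token logit vector. Illustrative only; not this model's implementation.
import numpy as np

def sample_next_token(logits, temperature=0.75, top_k=100, top_p=1.0, rng=None):
    rng = rng or np.random.default_rng()
    logits = np.asarray(logits, dtype=np.float64)

    # Temperature: divide logits before softmax. Lower values sharpen the
    # distribution (more deterministic); higher values flatten it (more random).
    logits = logits / max(temperature, 1e-8)

    # top_k: keep only the k highest-scoring tokens, mask out the rest.
    if 0 < top_k < logits.size:
        cutoff = np.sort(logits)[-top_k]
        logits = np.where(logits < cutoff, -np.inf, logits)

    # Softmax over the surviving tokens.
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()

    # top_p: keep the smallest set of tokens whose cumulative probability
    # reaches p; 1.0 disables the filter.
    if top_p < 1.0:
        order = np.argsort(probs)[::-1]
        cumulative = np.cumsum(probs[order])
        keep = order[cumulative <= top_p]
        if keep.size == 0:  # always keep at least the single most likely token
            keep = order[:1]
        mask = np.zeros_like(probs)
        mask[keep] = probs[keep]
        probs = mask / mask.sum()

    return int(rng.choice(probs.size, p=probs))

# Example: a toy 5-token vocabulary.
print(sample_next_token([2.0, 1.5, 0.3, -1.0, -2.0]))
```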
Output Schema
Output
Example Execution Logs
start generate
end generate
Version Details
- Version ID: 209e2fde138326f89fdb246df01426ffd698560f66a4dc8d01736ecca98d2d97
- Version Created: December 18, 2024