fofr/star-trek-adventure 🔢📝❓ → 📝
About
Example Output
Prompt:
"<universe>"
Output
Performance Metrics
- Prediction Time: 24.95s
- Total Time: 491.26s
All Input Parameters
{ "top_k": 50, "top_p": 1, "prompt": "<universe>", "decoding": "top_p", "max_length": 500, "temperature": 0.75, "repetition_penalty": 1.2 }
Input Parameters
- top_k: The number of highest-probability vocabulary tokens to keep for top-k filtering. Only used when top_k decoding is selected.
- top_p: When decoding text, samples from the top p fraction of most likely tokens; lower it to ignore less likely tokens. Only used when top_p decoding is selected.
- prompt (required): Input prompt.
- decoding: Choose a decoding method, either top_k or top_p (see the sketch after this list).
- max_length: Maximum number of tokens to generate. A word is generally 2-3 tokens.
- temperature: Adjusts the randomness of outputs: values greater than 1 are more random, 0 is deterministic, and 0.75 is a good starting value.
- repetition_penalty: Penalty for repeated words in generated text; 1 means no penalty, values greater than 1 discourage repetition, and values less than 1 encourage it.
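The decoding parameters above interact as follows: temperature reshapes the next-token distribution, then either top-k or top-p (nucleus) filtering trims it before a token is sampled. The toy sketch below is illustrative only and is not this model's code; the vocabulary and logits are made up.

```python
import math
import random

def softmax(logits, temperature=0.75):
    # temperature < 1 sharpens the distribution (more deterministic), > 1 flattens it
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(l - m) for l in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def top_k_filter(probs, k):
    # keep only the k most likely tokens, then renormalise
    kept = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)[:k]
    total = sum(p for _, p in kept)
    return {tok: p / total for tok, p in kept}

def top_p_filter(probs, p):
    # keep the smallest set of tokens whose cumulative probability reaches p
    kept, cumulative = {}, 0.0
    for tok, prob in sorted(probs.items(), key=lambda kv: kv[1], reverse=True):
        kept[tok] = prob
        cumulative += prob
        if cumulative >= p:
            break
    total = sum(kept.values())
    return {tok: q / total for tok, q in kept.items()}

# toy vocabulary and logits standing in for a real model's next-token scores
vocab = ["Kirk", "Spock", "warp", "tribble", "phaser"]
probs = dict(zip(vocab, softmax([2.0, 1.5, 1.0, 0.2, -1.0])))
filtered = top_p_filter(probs, p=0.9)          # or top_k_filter(probs, k=3)
print(random.choices(list(filtered), weights=list(filtered.values()))[0])
```

With top_p set to 1, as in the default inputs above, nucleus filtering keeps the entire distribution, so temperature and repetition_penalty do most of the work.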
Output Schema
Output
Example Execution Logs
The attention mask and the pad token id were not set. As a consequence, you may observe unexpected behavior. Please pass your input's `attention_mask` to obtain reliable results. Setting `pad_token_id` to `eos_token_id`:50256 for open-end generation.
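This warning comes from Hugging Face transformers when `generate()` is called without an attention mask or pad token; GPT-2-style models have no pad token, so the EOS token (50256) is reused. Below is a hedged sketch of the usual fix, with `gpt2` as a stand-in checkpoint rather than this model's actual weights or code.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

encoded = tokenizer("<universe>", return_tensors="pt")
output_ids = model.generate(
    input_ids=encoded["input_ids"],
    attention_mask=encoded["attention_mask"],  # passing the mask silences the warning
    pad_token_id=tokenizer.eos_token_id,       # no pad token in GPT-2, so reuse EOS (50256)
    do_sample=True,
    max_length=500,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```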
Version Details
- Version ID: bb5c6d426fabd3736faf7243ea70d2cdbc0f8131b22953386759a9b4d2858aad
- Version Created: April 19, 2023