nateraw/axolotl-llama-2-7b-english-to-hinglish
About

Example Output
Prompt:
"What's happening?"
Output:
kya ho raha hai?
Performance Metrics
- Prediction Time: 0.59s
- Total Time: 0.63s
All Input Parameters
{ "top_k": 50, "top_p": 0.95, "prompt": "What's happening?", "do_sample": true, "temperature": 0.9, "max_new_tokens": 512, "prompt_template": "### System:\nBelow is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\nTranslate the input from English to Hinglish\n\n### Input:\n{prompt}\n\n### Response:\n" }
Input Parameters
- top_k
- The number of highest probability tokens to consider for generating the output. If > 0, only keep the top k tokens with highest probability (top-k filtering).
- top_p
- A probability threshold for generating the output. If < 1.0, only keep the top tokens with cumulative probability >= top_p (nucleus filtering). Nucleus filtering is described in Holtzman et al. (http://arxiv.org/abs/1904.09751).
- prompt (required)
- The input text to translate from English to Hinglish.
- do_sample
- Whether to use sampling; if false, greedy decoding is used instead.
- temperature
- The value used to modulate the next token probabilities.
- max_new_tokens
- The maximum number of tokens the model should generate as output.
- prompt_template
- The template used to format the prompt before passing it to the model. For no template, you can set this to `{prompt}`.
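To make the `top_k` and `top_p` parameters above concrete, here is a minimal sketch of how top-k filtering followed by nucleus (top-p) filtering narrows the candidate token set before sampling. This is an illustrative re-implementation of the general technique, not this model's actual decoding code; the function name and example probabilities are made up for the demonstration.

```python
def top_k_top_p_filter(probs, top_k=50, top_p=0.95):
    """Return (token_index, probability) pairs that survive top-k filtering
    followed by nucleus (top-p) filtering."""
    # Rank tokens by probability, descending, remembering their indices.
    ranked = sorted(enumerate(probs), key=lambda item: item[1], reverse=True)
    # top-k: if top_k > 0, keep only the k most probable tokens.
    if top_k > 0:
        ranked = ranked[:top_k]
    # top-p: keep the smallest prefix whose cumulative probability >= top_p.
    kept, cumulative = [], 0.0
    for index, p in ranked:
        kept.append((index, p))
        cumulative += p
        if cumulative >= top_p:
            break
    return kept

# With a toy 5-token distribution, top_k=3 first trims to the 3 best tokens,
# then top_p=0.8 stops once cumulative probability reaches 0.8.
print(top_k_top_p_filter([0.5, 0.3, 0.1, 0.06, 0.04], top_k=3, top_p=0.8))
```

The sampler would then renormalize the surviving probabilities and draw the next token from that reduced set.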
Output Schema
Output
Example Execution Logs
```
=== Formatted Prompt ===
### System:
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
Translate the input from English to Hinglish

### Input:
What's happening?

### Response:
========= END ==========
```
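The formatted prompt shown in the logs is produced by substituting the `prompt` input into the `{prompt}` placeholder of `prompt_template`. A minimal sketch of that substitution (using Python's `str.format`; the model server may do this differently internally):

```python
# The default prompt_template from the input parameters above.
prompt_template = (
    "### System:\n"
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n"
    "Translate the input from English to Hinglish\n\n"
    "### Input:\n"
    "{prompt}\n\n"
    "### Response:\n"
)

# Fill the {prompt} placeholder with the user's input.
formatted = prompt_template.format(prompt="What's happening?")
print(formatted)
```

Setting `prompt_template` to just `{prompt}` would pass the input through unchanged, as noted in the parameter description.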
Version Details
- Version ID: 03c8cc6582309c28ec5fdea84c94f49085fb105a1137f4771525376a88d8d95f
- Version Created: January 8, 2024