nateraw/samsum-llama-7b
About
Llama-2-7B fine-tuned on the SamSum dataset for dialogue summarization.
Example Output
Prompt:
"
[INST] <<SYS>>
Use the Input to provide a summary of a conversation.
<</SYS>>

Input:
Harry: Who are you?
Hagrid: Rubeus Hagrid, Keeper of Keys and Grounds at Hogwarts. Of course, you know all about Hogwarts.
Harry: Sorry, no.
Hagrid: No? Blimey, Harry, did you never wonder where yer parents learned it all?
Harry: All what?
Hagrid: Yer a wizard, Harry.
Harry: I-- I'm a what?
Hagrid: A wizard! And a thumpin' good 'un, I'll wager, once you've been trained up a bit. [/INST]
Summary:
"Output
Harry is a wizard.
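The formatted prompt above follows the standard Llama-2 chat template: a system instruction wrapped in `<<SYS>>` tags, the user message, and the whole turn wrapped in `[INST]` tags, with a trailing "Summary: " cue for the model to complete. A minimal sketch of building such a prompt (the `format_prompt` helper name is ours, not part of the model):

```python
def format_prompt(system: str, user: str) -> str:
    """Wrap a system instruction and a user message in Llama-2 [INST]/<<SYS>>
    tags, ending with a "Summary: " cue as in the example above."""
    return f"[INST] <<SYS>>\n{system}\n<</SYS>>\n\n{user} [/INST]\n\nSummary: "

dialogue = "Input:\nHarry: Who are you?\nHagrid: Yer a wizard, Harry."
prompt = format_prompt(
    "Use the Input to provide a summary of a conversation.", dialogue
)
print(prompt)
```

The exact string produced matches the `prompt` field shown under All Input Parameters below.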
Performance Metrics
0.73s
Prediction Time
0.74s
Total Time
All Input Parameters
{
"debug": false,
"top_k": 50,
"top_p": 0.95,
"prompt": "[INST] <<SYS>>\nUse the Input to provide a summary of a conversation.\n<</SYS>>\n\nInput:\nHarry: Who are you?\nHagrid: Rubeus Hagrid, Keeper of Keys and Grounds at Hogwarts. Of course, you know all about Hogwarts.\nHarry: Sorry, no.\nHagrid: No? Blimey, Harry, did you never wonder where yer parents learned it all?\nHarry: All what?\nHagrid: Yer a wizard, Harry.\nHarry: I-- I'm a what?\nHagrid: A wizard! And a thumpin' good 'un, I'll wager, once you've been trained up a bit. [/INST]\n\nSummary: ",
"temperature": 0.4,
"max_new_tokens": 256,
"min_new_tokens": -1,
"stop_sequences": "</s>"
}
Input Parameters
- seed
- Random seed. Leave blank to randomize the seed
- debug
- Provide debugging output in logs
- top_k
- When decoding text, samples from the top k most likely tokens; lower to ignore less likely tokens
- top_p
- When decoding text, samples from the smallest set of most likely tokens whose cumulative probability exceeds p (nucleus sampling); lower to ignore less likely tokens
- prompt (required)
- Prompt to send to the model.
- temperature
- Adjusts randomness of outputs: values greater than 1 increase randomness, 0 is deterministic, and 0.75 is a good starting value.
- max_new_tokens
- Maximum number of tokens to generate. A word is generally 1-3 tokens.
- min_new_tokens
- Minimum number of tokens to generate. To disable, set to -1. A word is generally 1-3 tokens.
- stop_sequences
- A comma-separated list of sequences to stop generation at. For example, '<end>,<stop>' will stop generation at the first instance of '<end>' or '<stop>'.
- replicate_weights
- Path to fine-tuned weights produced by a Replicate fine-tune job.
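The parameters above map directly onto the input dict passed to the Replicate API. A hedged sketch of how one might call this model with the Replicate Python client (`pip install replicate`, with `REPLICATE_API_TOKEN` set in the environment); the actual network call is shown commented out since it requires an API token:

```python
# Input dict mirroring the "All Input Parameters" example above
# (prompt shortened here for readability).
inputs = {
    "prompt": (
        "[INST] <<SYS>>\n"
        "Use the Input to provide a summary of a conversation.\n"
        "<</SYS>>\n\n"
        "Input:\nHarry: Who are you?\n"
        "Hagrid: Yer a wizard, Harry. [/INST]\n\nSummary: "
    ),
    "temperature": 0.4,
    "top_k": 50,
    "top_p": 0.95,
    "max_new_tokens": 256,
    "min_new_tokens": -1,
    "stop_sequences": "</s>",
}

# Requires network access and an API token, so shown commented out:
# import replicate
# output = replicate.run(
#     "nateraw/samsum-llama-7b:"
#     "16665c1f00ad4d5d6c88393aa05390cf4d9f4e49c8abde4c58f2e1e71fd806f9",
#     input=inputs,
# )
# print("".join(output))  # language models stream tokens; join to get text
```

The version hash in the model reference is the one listed under Version Details below.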
Output Schema
Output
Example Execution Logs
Your formatted prompt is: [INST] <<SYS>> Use the Input to provide a summary of a conversation. <</SYS>> Input: Harry: Who are you? Hagrid: Rubeus Hagrid, Keeper of Keys and Grounds at Hogwarts. Of course, you know all about Hogwarts. Harry: Sorry, no. Hagrid: No? Blimey, Harry, did you never wonder where yer parents learned it all? Harry: All what? Hagrid: Yer a wizard, Harry. Harry: I-- I'm a what? Hagrid: A wizard! And a thumpin' good 'un, I'll wager, once you've been trained up a bit. [/INST] Summary:
Version Details
- Version ID
16665c1f00ad4d5d6c88393aa05390cf4d9f4e49c8abde4c58f2e1e71fd806f9
- Version Created
- September 6, 2023