nateraw/samsum-llama-7b 🔢✓📝 → 📝

▶️ 140 runs 📅 Sep 2023 ⚙️ Cog 0.8.1 🔗 GitHub 📄 Paper ⚖️ License
dialogue-summarization summarization text-generation

About

Llama 2 7B fine-tuned on the SAMSum dataset for dialogue summarization

Example Output

Prompt:

"

[INST] <<SYS>>
Use the Input to provide a summary of a conversation.
<</SYS>>

Input:
Harry: Who are you?
Hagrid: Rubeus Hagrid, Keeper of Keys and Grounds at Hogwarts. Of course, you know all about Hogwarts.
Harry: Sorry, no.
Hagrid: No? Blimey, Harry, did you never wonder where yer parents learned it all?
Harry: All what?
Hagrid: Yer a wizard, Harry.
Harry: I-- I'm a what?
Hagrid: A wizard! And a thumpin' good 'un, I'll wager, once you've been trained up a bit. [/INST]

Summary:

"
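The prompt above follows the Llama 2 chat template: the system instruction sits between `<<SYS>>` and `<</SYS>>` inside an `[INST] … [/INST]` block, followed by a `Summary:` cue. A minimal sketch of assembling such a prompt in Python (the helper name is illustrative, not part of the model):

```python
def build_prompt(system: str, dialogue: str) -> str:
    """Wrap a system instruction and a dialogue in the Llama 2 chat template."""
    return (
        f"[INST] <<SYS>>\n{system}\n<</SYS>>\n\n"
        f"Input:\n{dialogue} [/INST]\n\nSummary: "
    )

prompt = build_prompt(
    "Use the Input to provide a summary of a conversation.",
    "Harry: Who are you?\nHagrid: Yer a wizard, Harry.",
)
```

This reproduces the structure shown in the formatted-prompt log further down the page.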

Output

Harry is a wizard.

Performance Metrics

0.73s Prediction Time
0.74s Total Time
All Input Parameters
{
  "debug": false,
  "top_k": 50,
  "top_p": 0.95,
  "prompt": "[INST] <<SYS>>\nUse the Input to provide a summary of a conversation.\n<</SYS>>\n\nInput:\nHarry: Who are you?\nHagrid: Rubeus Hagrid, Keeper of Keys and Grounds at Hogwarts. Of course, you know all about Hogwarts.\nHarry: Sorry, no.\nHagrid: No? Blimey, Harry, did you never wonder where yer parents learned it all?\nHarry: All what?\nHagrid: Yer a wizard, Harry.\nHarry: I-- I'm a what?\nHagrid: A wizard! And a thumpin' good 'un, I'll wager, once you've been trained up a bit. [/INST]\n\nSummary: ",
  "temperature": 0.4,
  "max_new_tokens": 256,
  "min_new_tokens": -1,
  "stop_sequences": "</s>"
}
Input Parameters
seed Type: integer
Random seed. Leave blank to randomize the seed
debug Type: boolean Default: false
provide debugging output in logs
top_k Type: integer Default: 50 Range: 0 - ∞
When decoding text, samples from the top k most likely tokens; lower to ignore less likely tokens
top_p Type: number Default: 0.9 Range: 0 - 1
When decoding text, samples from the top p percentage of most likely tokens; lower to ignore less likely tokens
prompt (required) Type: string
Prompt to send to the model.
temperature Type: number Default: 0.75 Range: 0.01 - 5
Adjusts the randomness of outputs: values greater than 1 increase randomness, values near 0 make outputs nearly deterministic. 0.75 is a good starting value.
max_new_tokens Type: integer Default: 128 Range: 1 - ∞
Maximum number of tokens to generate. A word is generally 2-3 tokens
min_new_tokens Type: integer Default: -1 Range: -1 - ∞
Minimum number of tokens to generate. To disable, set to -1. A word is generally 2-3 tokens.
stop_sequences Type: string
A comma-separated list of sequences to stop generation at. For example, '<end>,<stop>' will stop generation at the first instance of '<end>' or '<stop>'.
replicate_weights Type: string
Path to fine-tuned weights produced by a Replicate fine-tune job.
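The parameters above map directly onto an API call. A sketch using the official `replicate` Python client (requires `pip install replicate` and a `REPLICATE_API_TOKEN` in the environment; the version ID is the one listed under Version Details below, and the helper names are illustrative):

```python
MODEL = ("nateraw/samsum-llama-7b:"
         "16665c1f00ad4d5d6c88393aa05390cf4d9f4e49c8abde4c58f2e1e71fd806f9")

def make_input(dialogue: str) -> dict:
    """Build the input payload used in the example prediction above."""
    prompt = (
        "[INST] <<SYS>>\n"
        "Use the Input to provide a summary of a conversation.\n"
        "<</SYS>>\n\n"
        f"Input:\n{dialogue} [/INST]\n\nSummary: "
    )
    return {
        "prompt": prompt,
        "temperature": 0.4,
        "top_p": 0.95,
        "top_k": 50,
        "max_new_tokens": 256,
        "stop_sequences": "</s>",
    }

def summarize(dialogue: str) -> str:
    """Run the hosted model and return the summary as one string."""
    import replicate
    # replicate.run streams tokens for this model, so join the pieces.
    return "".join(replicate.run(MODEL, input=make_input(dialogue)))
```

Calling `summarize("Harry: Who are you?\nHagrid: Yer a wizard, Harry.")` performs the same prediction shown in the example output above.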
Output Schema

Output

Type: array (items: string)
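Because the output is an array of strings (streamed tokens), a client typically concatenates the pieces to recover the full summary:

```python
# Streamed output arrives as a sequence of token strings; join to get the text.
tokens = ["Harry ", "is ", "a ", "wizard."]
summary = "".join(tokens)
print(summary)  # Harry is a wizard.
```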

Example Execution Logs
Your formatted prompt is:
[INST] <<SYS>>
Use the Input to provide a summary of a conversation.
<</SYS>>
Input:
Harry: Who are you?
Hagrid: Rubeus Hagrid, Keeper of Keys and Grounds at Hogwarts. Of course, you know all about Hogwarts.
Harry: Sorry, no.
Hagrid: No? Blimey, Harry, did you never wonder where yer parents learned it all?
Harry: All what?
Hagrid: Yer a wizard, Harry.
Harry: I-- I'm a what?
Hagrid: A wizard! And a thumpin' good 'un, I'll wager, once you've been trained up a bit. [/INST]
Summary:
Version Details
Version ID
16665c1f00ad4d5d6c88393aa05390cf4d9f4e49c8abde4c58f2e1e71fd806f9
Version Created
September 6, 2023
Run on Replicate →