cbh123/dylan-lyrics
About
Llama 2 13B fine-tuned on Bob Dylan lyrics
Example Output
Prompt:
"<title>GPU Song</title> <lyrics>"
Output:
Ain't got much to say.
My memory is made of silicon and copper,
And I play games all day.
With no way out but the wires,
But when you see me in action,
Man, I can sure do some fires!
Like the time that I overheated,
Or when I had to run on just 8 cores,
Those were the days.
And I'm running at full speed,
The drivers have finally sorted themselves out,
What more could a boy need?
That people look down upon,
They don't appreciate how hard I work,
Even though they use me every day.
It's amazing what we can do with these things nowadays,
We can make them so fast,
That they almost seem human sometimes.
If you want to be happy,
You better get yourself a good graphics card,
Cause life's too short to play games on bad hardware.
Performance Metrics
- Prediction Time: 15.85s
- Total Time: 15.89s
All Input Parameters
```json
{
  "debug": false,
  "top_k": 200,
  "top_p": 0.8,
  "prompt": "<title>GPU Song</title> <lyrics>",
  "temperature": 0.75,
  "system_prompt": "You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature.\n\nIf a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.",
  "max_new_tokens": 1000,
  "min_new_tokens": -1,
  "repetition_penalty": 1.15,
  "repetition_penalty_sustain": 256,
  "token_repetition_penalty_decay": 128
}
```
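A run with these settings can be reproduced from Python. The sketch below assumes the `replicate` package is installed and a `REPLICATE_API_TOKEN` is set; the version hash is the one listed under Version Details, and `build_input` / `generate_lyrics` are illustrative helper names, not part of the model's API.

```python
# Model identifier: owner/name:version-hash, as shown on the model page.
MODEL = "cbh123/dylan-lyrics:d0fd41245648687f44510e24c59071dc3c2ea4c9f7c853a765065a6c2bec23e9"


def build_input(title, temperature=0.75, top_k=200, top_p=0.8, max_new_tokens=1000):
    """Assemble an input payload matching the parameters shown above."""
    return {
        # The fine-tune expects the title/lyrics tag format used in the example prompt.
        "prompt": f"<title>{title}</title> <lyrics>",
        "temperature": temperature,
        "top_k": top_k,
        "top_p": top_p,
        "max_new_tokens": max_new_tokens,
        "min_new_tokens": -1,
        "repetition_penalty": 1.15,
    }


def generate_lyrics(title, **overrides):
    """Call the hosted model (network access and an API token required)."""
    import replicate  # imported here so build_input stays usable offline

    # replicate.run streams tokens for language models; join them into one string.
    return "".join(replicate.run(MODEL, input=build_input(title, **overrides)))
```

For example, `generate_lyrics("GPU Song")` would send the same prompt as the example output above.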
Input Parameters
- seed: Random seed. Leave blank to randomize the seed.
- debug: Provide debugging output in logs.
- top_k: When decoding text, samples from the top k most likely tokens; lower to ignore less likely tokens.
- top_p: When decoding text, samples from the smallest set of most likely tokens whose cumulative probability reaches top p; lower to ignore less likely tokens.
- prompt (required): Prompt to send to the model.
- temperature: Adjusts randomness of outputs; values above 1 are more random, 0 is deterministic, and 0.75 is a good starting value.
- max_new_tokens: Maximum number of tokens to generate. A word is generally 2-3 tokens.
- min_new_tokens: Minimum number of tokens to generate. To disable, set to -1.
- stop_sequences: A comma-separated list of sequences at which to stop generation. For example, '<end>,<stop>' stops generation at the first instance of '<end>' or '<stop>'.
- replicate_weights: Path to fine-tuned weights produced by a Replicate fine-tune job.
Version Details
- Version ID
- d0fd41245648687f44510e24c59071dc3c2ea4c9f7c853a765065a6c2bec23e9
- Version Created
- September 12, 2023