fofr/star-trek-llama
About
llama-7b trained on the Memory Alpha Star Trek Wiki

Example Output
Prompt:
"Here is a synopsis for a Star Trek episode where Jean Luc Picard hallucinates about Super Mario:"
Output:
Jean Luc Picard has a dream where he's trapped under a boulder while trying to escape, and has to climb out. He then runs into a wall and is attacked by Mario, but kicks him away. He finds Riker, who is now a turtle, and tells him to hit him. Just when Riker is about to punch Picard, he wakes up. Later, Picard talks about his dream with Riker, who says that it's probably due to the lack of sleep he's been getting. Riker agrees and says it's time to go because Picard has to get the ship ready for a meeting with the Ferengi. Picard turns, looks at Riker, and says 'It's a me!' Riker turns to look at him, then screams in fear. Picard says 'We must find the Ferengi!' and runs off through the corridors. Riker chases him for a moment, then turns and says 'Okay, I'm out. I'll see you at the Ferengi meeting.' He then walks out of the captain's quarters. Picard looks at the door, then says 'I won't fail you, Riker.' He opens the door and sees Riker standing in front of him. 'Riker! You're a turtle! You're a turtle!' Riker says 'I'm not a turtle. I'm a turtle shell!' 'I'm a turtle!' 'I'm not a turtle. I'm a turtle shell!' 'I'm a turtle shell!' 'I'm not a turtle shell! I'm a turtle!' 'I'm a turtle shell!' 'I'm not a turtle shell! I'm a turtle!' 'I'm a turtle shell!' 'I'm not a turtle shell! I'm a turtle!' 'I'm a turtle!' 'I'm a turtle shell!' 'I'm a turtle!' 'I'm a turt
Performance Metrics
Prediction Time: 19.72s
Total Time: 19.88s
All Input Parameters
{
  "top_p": 1,
  "prompt": "Here is a synopsis for a Star Trek episode where Jean Luc Picard hallucinates about Super Mario:",
  "max_length": 500,
  "temperature": 0.75,
  "repetition_penalty": 1
}
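As a sketch, the parameters above could be sent to this model through Replicate's public HTTP predictions API. The payload shape and version ID come from this page; the endpoint URL and auth header follow Replicate's documented API, and the actual request is left commented out since it needs an API token:

```python
import json

# Version ID of fofr/star-trek-llama, as listed on this page.
VERSION = "f68c77246547da41231789a135e5383e801bf2bb73ea811cb7053a703ac535d8"

# Prediction request payload: a model version plus the model's inputs.
payload = {
    "version": VERSION,
    "input": {
        "prompt": (
            "Here is a synopsis for a Star Trek episode where "
            "Jean Luc Picard hallucinates about Super Mario:"
        ),
        "max_length": 500,
        "temperature": 0.75,
        "top_p": 1,
        "repetition_penalty": 1,
    },
}

body = json.dumps(payload).encode()

# To actually run the prediction (requires REPLICATE_API_TOKEN in the environment):
# import os, urllib.request
# req = urllib.request.Request(
#     "https://api.replicate.com/v1/predictions",
#     data=body,
#     headers={
#         "Authorization": f"Token {os.environ['REPLICATE_API_TOKEN']}",
#         "Content-Type": "application/json",
#     },
# )
# print(urllib.request.urlopen(req).read().decode())
```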
Input Parameters
- debug: Provide debugging output in logs.
- top_p: When decoding text, samples only from the smallest set of most likely tokens whose cumulative probability reaches top_p; lower values ignore less likely tokens.
- prompt (required): Prompt to send to Llama.
- max_length: Maximum number of tokens to generate. A word is generally 2-3 tokens.
- temperature: Adjusts the randomness of outputs; values greater than 1 are more random, 0 is deterministic. 0.75 is a good starting value.
- repetition_penalty: Penalty for repeated words in generated text; 1 is no penalty, values greater than 1 discourage repetition, and values less than 1 encourage it.
Version Details
- Version ID: f68c77246547da41231789a135e5383e801bf2bb73ea811cb7053a703ac535d8
- Version Created: April 13, 2023