meta/llama-2-13b

Official Β· 209.1K runs Β· Aug 2023 Β· Cog 0.8.1 Β· GitHub Β· Paper Β· License
code-generation text-generation

About

Base version of Llama 2 13B, a 13-billion-parameter language model.

Example Output

Prompt:

"Once upon a time a llama explored"

Output

the wilderness with his mother.
He was hungry and looked for food. He searched everywhere but could not find anything to eat. Finally, he found some lettuce growing in the wild. He began eating it happily when suddenly there was a terrible rumble of thunder and the ground shook underneath him! The llama quickly ran away from this dangerous place before lightning struck down on top of him or worse yet…being eaten alive by an angry bear!
The moral of the story is that you should always be prepared for disasters like these because they can happen at any time without

Performance Metrics

2.81s Prediction Time
2.68s Total Time
All Input Parameters
{
  "debug": false,
  "top_k": 50,
  "top_p": 0.9,
  "prompt": "Once upon a time a llama explored",
  "temperature": 0.75,
  "max_new_tokens": 128,
  "min_new_tokens": -1
}
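The input payload above can be sent to the model with the Replicate Python client. A minimal sketch, assuming the third-party `replicate` package is installed and a `REPLICATE_API_TOKEN` environment variable is set; the version hash comes from the Version Details section of this page:

```python
# Input payload matching the example prediction above.
input_params = {
    "debug": False,
    "top_k": 50,
    "top_p": 0.9,
    "prompt": "Once upon a time a llama explored",
    "temperature": 0.75,
    "max_new_tokens": 128,
    "min_new_tokens": -1,
}

def run_llama2_13b(params):
    """Call meta/llama-2-13b on Replicate (requires network and an API token)."""
    import replicate  # third-party client: pip install replicate
    output = replicate.run(
        "meta/llama-2-13b:078d7a002387bd96d93b0302a4c03b3f15824b63104034bfa943c63a8f208c38",
        input=params,
    )
    # The model returns an array of string chunks; join them for the full text.
    return "".join(output)
```

The call is wrapped in a function so the payload can be inspected or modified before any network request is made.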
Input Parameters
seed Type: integer
Random seed. Leave blank to randomize the seed.
debug Type: boolean Β· Default: false
Provide debugging output in logs.
top_k Type: integer Β· Default: 50 Β· Range: 0 - ∞
When decoding text, samples from the top k most likely tokens; lower to ignore less likely tokens.
top_p Type: number Β· Default: 0.9 Β· Range: 0 - 1
When decoding text, samples from the smallest set of tokens whose cumulative probability exceeds p; lower to ignore less likely tokens.
prompt (required) Type: string
Prompt to send to the model.
temperature Type: number Β· Default: 0.75 Β· Range: 0.01 - 5
Adjusts the randomness of outputs: values above 1 increase randomness, values near 0 are nearly deterministic; 0.75 is a good starting value.
max_new_tokens Type: integer Β· Default: 128 Β· Range: 1 - ∞
Maximum number of tokens to generate. A word is generally 2-3 tokens.
min_new_tokens Type: integer Β· Default: -1 Β· Range: -1 - ∞
Minimum number of tokens to generate. To disable, set to -1. A word is generally 2-3 tokens.
stop_sequences Type: string
A comma-separated list of sequences at which to stop generation. For example, '<end>,<stop>' will stop generation at the first instance of '<end>' or '<stop>'.
replicate_weights Type: string
Path to fine-tuned weights produced by a Replicate fine-tune job.
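The defaults and ranges listed above can be checked client-side before sending a request. A small sketch using a hypothetical `validate_input` helper (not part of the Replicate API) that encodes the documented schema:

```python
# Defaults as documented in the parameter list above.
DEFAULTS = {
    "debug": False,
    "top_k": 50,
    "top_p": 0.9,
    "temperature": 0.75,
    "max_new_tokens": 128,
    "min_new_tokens": -1,
}

def validate_input(params):
    """Check required fields and documented ranges; return params merged with defaults."""
    if not isinstance(params.get("prompt"), str):
        raise ValueError("prompt is required and must be a string")
    merged = {**DEFAULTS, **params}
    if not 0 <= merged["top_p"] <= 1:
        raise ValueError("top_p must be in [0, 1]")
    if merged["top_k"] < 0:
        raise ValueError("top_k must be >= 0")
    if not 0.01 <= merged["temperature"] <= 5:
        raise ValueError("temperature must be in [0.01, 5]")
    if merged["max_new_tokens"] < 1:
        raise ValueError("max_new_tokens must be >= 1")
    if merged["min_new_tokens"] < -1:
        raise ValueError("min_new_tokens must be >= -1")
    return merged
```

Validating locally surfaces range errors immediately instead of after a round trip to the API.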
Output Schema

Output

Type: array β€’ Items Type: string
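Because the output is an array of strings (chunks of generated text), the full completion is recovered by concatenating the elements. A minimal sketch with a made-up chunk list:

```python
# Output arrives as a list of string chunks, per the schema above.
chunks = ["the wilderness ", "with his ", "mother."]
full_text = "".join(chunks)
print(full_text)  # -> the wilderness with his mother.
```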

Example Execution Logs
Your formatted prompt is:
Once upon a time a llama explored
Version Details
Version ID
078d7a002387bd96d93b0302a4c03b3f15824b63104034bfa943c63a8f208c38
Version Created
September 13, 2023