meta/llama-2-70b

Official · 359.6K runs · Jul 2023 · Cog 0.8.1 · GitHub · Paper · License
code-generation question-answering text-generation text-translation

About

Base version of Llama 2, a 70 billion parameter language model from Meta.

Example Output

Prompt:

"

original prompt: garden with flowers and dna strands
improved prompt: psychedelic 3d vector art illustration of garden full of colorful double helix dna strands and exotic flowers by lisa frank, beeple and tim hildebrandt, hyper realism, art deco, intricate, elegant, highly detailed, unreal engine, octane render, smooth

original prompt: humanoid plant monster
improved prompt:

"

Output

3d vector art illustration of a humanoid plant monster with green skin and vivid colorful paisley patterned leaves, glowing red eyes and sharp teeth, hyper realism, art deco

Performance Metrics

8.47s Prediction Time
8.55s Total Time
All Input Parameters
{
  "top_p": 1,
  "prompt": "original prompt: garden with flowers and dna strands\nimproved prompt: psychedelic 3d vector art illustration of garden full of colorful double helix dna strands and exotic flowers by lisa frank, beeple and tim hildebrandt, hyper realism, art deco, intricate, elegant, highly detailed, unreal engine, octane render, smooth\n\noriginal prompt: humanoid plant monster\nimproved prompt: ",
  "max_length": 150,
  "temperature": 0.75,
  "repetition_penalty": 1
}
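
As a rough sketch, the same request could be reproduced with Replicate's Python client (this is an assumption about usage, not part of the page). The model identifier below combines the model name with the version hash listed under Version Details; note that the recorded run above passed "max_length", while the current input schema documents max_new_tokens, so the parameter name used here follows that schema.

import replicate

# Requires REPLICATE_API_TOKEN in the environment.
few_shot_prompt = (
    "original prompt: garden with flowers and dna strands\n"
    "improved prompt: psychedelic 3d vector art illustration of garden full of "
    "colorful double helix dna strands and exotic flowers by lisa frank, beeple "
    "and tim hildebrandt, hyper realism, art deco, intricate, elegant, highly "
    "detailed, unreal engine, octane render, smooth\n\n"
    "original prompt: humanoid plant monster\n"
    "improved prompt: "
)

# Sketch only: run the base model with roughly the parameters shown above.
output = replicate.run(
    "meta/llama-2-70b:a52e56fee2269a78c9279800ec88898cecb6c8f1df22a6483132bea266648f00",
    input={
        "prompt": few_shot_prompt,
        "top_p": 1,
        "temperature": 0.75,
        "max_new_tokens": 150,  # the recorded run passed this value as "max_length"
        "repetition_penalty": 1,
    },
)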
Input Parameters
seed Type: integer
Random seed. Leave blank to randomize the seed
debug Type: boolean, Default: false
Provide debugging output in logs
top_k Type: integer, Default: 50, Range: 0 - ∞
When decoding text, samples from the top k most likely tokens; lower to ignore less likely tokens
top_p Type: number, Default: 0.9, Range: 0 - 1
When decoding text, samples from the top p percentage of most likely tokens; lower to ignore less likely tokens
prompt (required) Type: string
Prompt to send to the model.
temperature Type: number, Default: 0.75, Range: 0.01 - 5
Adjusts the randomness of outputs: values above 1 are more random, values near 0 are close to deterministic; 0.75 is a good starting value.
max_new_tokens Type: integer, Default: 128, Range: 1 - ∞
Maximum number of tokens to generate. A word is generally 2-3 tokens
min_new_tokens Type: integer, Default: -1, Range: -1 - ∞
Minimum number of tokens to generate. To disable, set to -1. A word is generally 2-3 tokens.
stop_sequences Type: string
A comma-separated list of sequences to stop generation at. For example, '<end>,<stop>' will stop generation at the first instance of '<end>' or '<stop>' (see the sketch after this parameter list).
replicate_weights Type: string
Path to fine-tuned weights produced by a Replicate fine-tune job.
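
For illustration only (same hypothetical client usage as above): stop_sequences is passed as a single comma-separated string rather than a list. Here a stop sequence cuts generation off before the model starts inventing a further "original prompt:" example; the specific sequence chosen is an assumption.

# Sketch: stop at the first occurrence of "original prompt:" so the model
# returns only one improved prompt instead of continuing the few-shot pattern.
input_params = {
    "prompt": "original prompt: humanoid plant monster\nimproved prompt: ",
    "temperature": 0.75,
    "max_new_tokens": 128,
    "stop_sequences": "original prompt:",  # comma-separated string for multiple sequences
}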
Output Schema

Output

Type: array, Items Type: string
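
Because the output is an array of strings (the generated text arrives as a sequence of chunks), the pieces are typically concatenated to recover the full completion. A minimal sketch, reusing the output value from the earlier example:

# Sketch: the schema above says the output is an array of strings,
# so join the chunks to get the full generated text.
completion = "".join(output)
print(completion)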

Version Details
Version ID
a52e56fee2269a78c9279800ec88898cecb6c8f1df22a6483132bea266648f00
Version Created
September 13, 2023