meta/llama-2-70b
About
Base version of Llama 2, a 70 billion parameter language model from Meta.

Example Output
Prompt:
"
original prompt: garden with flowers and dna strands
improved prompt: psychedelic 3d vector art illustration of garden full of colorful double helix dna strands and exotic flowers by lisa frank, beeple and tim hildebrandt, hyper realism, art deco, intricate, elegant, highly detailed, unreal engine, octane render, smooth
original prompt: humanoid plant monster
improved prompt:
Output
3d vector art illustration of a humanoid plant monster with green skin and vivid colorful paisley patterned leaves, glowing red eyes and sharp teeth, hyper realism, art deco
Performance Metrics
- Prediction Time: 8.47s
- Total Time: 8.55s
All Input Parameters
{ "top_p": 1, "prompt": "original prompt: garden with flowers and dna strands\nimproved prompt: psychedelic 3d vector art illustration of garden full of colorful double helix dna strands and exotic flowers by lisa frank, beeple and tim hildebrandt, hyper realism, art deco, intricate, elegant, highly detailed, unreal engine, octane render, smooth\n\noriginal prompt: humanoid plant monster\nimproved prompt: ", "max_length": 150, "temperature": 0.75, "repetition_penalty": 1 }
Input Parameters
- seed: Random seed. Leave blank to randomize the seed.
- debug: Provide debugging output in logs.
- top_k: When decoding text, samples from the top k most likely tokens; lower to ignore less likely tokens.
- top_p: When decoding text, samples from the top p percentage of most likely tokens; lower to ignore less likely tokens.
- prompt (required): Prompt to send to the model.
- temperature: Adjusts the randomness of outputs; values greater than 1 are more random, 0 is deterministic, and 0.75 is a good starting value (see the sampling sketch after this list).
- max_new_tokens: Maximum number of tokens to generate. A word is generally 2-3 tokens.
- min_new_tokens: Minimum number of tokens to generate. To disable, set to -1. A word is generally 2-3 tokens.
- stop_sequences: A comma-separated list of sequences to stop generation at. For example, '<end>,<stop>' will stop generation at the first instance of '<end>' or '<stop>'.
- replicate_weights: Path to fine-tuned weights produced by a Replicate fine-tune job.
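The decoding parameters above (top_k, top_p, and temperature) are standard sampling controls rather than anything specific to this model. The sketch below is a generic illustration, not Llama 2's actual decoder, and its parameter defaults are illustrative only; it shows how the three settings interact when picking the next token from a vector of logits.

import numpy as np

def sample_next_token(logits, temperature=0.75, top_k=50, top_p=1.0, rng=None):
    # Generic temperature / top-k / top-p sampling sketch.
    rng = rng if rng is not None else np.random.default_rng()

    # Temperature rescales the logits: values below 1 sharpen the distribution,
    # values above 1 flatten it, and temperature -> 0 approaches deterministic (greedy) decoding.
    logits = np.asarray(logits, dtype=np.float64) / max(temperature, 1e-8)
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()

    # top_k: keep only the k most likely tokens.
    order = np.argsort(probs)[::-1]
    keep = order[:top_k] if top_k > 0 else order

    # top_p: within those, keep the smallest prefix whose cumulative probability reaches top_p.
    cumulative = np.cumsum(probs[keep])
    cutoff = np.searchsorted(cumulative, top_p) + 1
    keep = keep[:cutoff]

    # Renormalize over the surviving tokens and sample one of them.
    kept_probs = probs[keep] / probs[keep].sum()
    return int(rng.choice(keep, p=kept_probs))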
Output Schema
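As a rough sketch (the exact fields for this version may differ), base language models on Replicate typically declare their output, titled "Output", as a streamed array of strings that is concatenated into the final text:

{
  "type": "array",
  "items": { "type": "string" },
  "title": "Output",
  "x-cog-array-type": "iterator",
  "x-cog-array-display": "concatenate"
}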
Version Details
- Version ID: a52e56fee2269a78c9279800ec88898cecb6c8f1df22a6483132bea266648f00
- Version Created: September 13, 2023