meta/llama-2-70b-chat

Official · 9.9M runs · Jul 2023 · Cog 0.8.1 · GitHub · Paper · License
Tags: chatbot, code-generation, summarization, text-generation

About

A 70 billion parameter language model from Meta, fine-tuned for chat completions.

Example Output

Prompt:

"Can you write a poem about open source machine learning? Let's make it in the style of E. E. Cummings."

Output

Sure, here's a poem about open source machine learning in the style of E. E. Cummings:

open source machine learning
a gift to the world, a treasure trove
of knowledge, free for all to use
and learn from, like a digital muse

with code that's open and transparent
anyone can see, anyone can tinker
and improve upon, a collaborative dance
of innovation, without a single anchor

the community shares and shapes it
together they build and refine
a tool that's greater than the sum of its parts
a true democracy of mind and design

from deep learning to natural language
it's all here, a boundless expanse
of possibility, a playground for thought
where anyone can explore and advance

so come and join this movement
embrace the power, the beauty, the grace
of open source machine learning
and watch the future unfold with ease.

Performance Metrics

4.70s Prediction Time
4.23s Total Time
All Input Parameters
{
  "debug": false,
  "top_p": 1,
  "prompt": "Can you write a poem about open source machine learning? Let's make it in the style of E. E. Cummings.",
  "temperature": 0.5,
  "system_prompt": "You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature.\n\nIf a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.",
  "max_new_tokens": 500,
  "min_new_tokens": -1
}
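
For reference, here is a minimal sketch of reproducing this prediction from the Replicate Python client. It assumes the replicate package is installed and REPLICATE_API_TOKEN is set in the environment; the long system prompt is truncated below and should be copied from the JSON above.

import replicate

# Reproduce the prediction above; for language models the client yields
# the output as a stream of string chunks, which we join into one completion.
output = replicate.run(
    "meta/llama-2-70b-chat:02e509c789964a7ea8736978a43525956ef40397be9033abf9fd2badfe68c9e3",
    input={
        "debug": False,
        "top_p": 1,
        "prompt": (
            "Can you write a poem about open source machine learning? "
            "Let's make it in the style of E. E. Cummings."
        ),
        # Full system prompt as shown in the JSON above (truncated here).
        "system_prompt": "You are a helpful, respectful and honest assistant. ...",
        "temperature": 0.5,
        "max_new_tokens": 500,
        "min_new_tokens": -1,
    },
)
print("".join(output))
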
Input Parameters
seed Type: integer
Random seed. Leave blank to randomize the seed.
debug Type: boolean · Default: false
Provide debugging output in logs.
top_k Type: integer · Default: 50 · Range: 0 - ∞
When decoding text, samples from the top k most likely tokens; lower to ignore less likely tokens.
top_p Type: number · Default: 0.9 · Range: 0 - 1
When decoding text, samples from the smallest set of tokens whose cumulative probability reaches top p; lower to ignore less likely tokens.
prompt (required) Type: string
Prompt to send to the model.
temperature Type: number · Default: 0.75 · Range: 0.01 - 5
Adjusts randomness of outputs: values greater than 1 are more random, values near 0 are more deterministic; 0.75 is a good starting value.
system_prompt Type: string · Default: You are a helpful assistant.
System prompt to send to the model. This is prepended to the prompt and helps guide model behavior.
max_new_tokens Type: integer · Default: 128 · Range: 1 - ∞
Maximum number of tokens to generate. A word is generally 2-3 tokens.
min_new_tokens Type: integer · Default: -1 · Range: -1 - ∞
Minimum number of tokens to generate. To disable, set to -1. A word is generally 2-3 tokens.
stop_sequences Type: string
A comma-separated list of sequences at which to stop generation. For example, '<end>,<stop>' will stop generation at the first instance of '<end>' or '<stop>'.
replicate_weights Type: string
Path to fine-tuned weights produced by a Replicate fine-tune job.
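
To illustrate how the decoding parameters above interact, here is a hedged sketch of a more deterministic request: a fixed seed and low temperature for repeatability, tighter top_k/top_p sampling, and an explicit stop sequence. Parameter names match the table above; the call pattern assumes the replicate Python client, and the prompt is only an example.

import replicate

# A more constrained generation: fixed seed plus low temperature for
# repeatability, tighter top-k/top-p sampling, and a stop sequence.
output = replicate.run(
    "meta/llama-2-70b-chat:02e509c789964a7ea8736978a43525956ef40397be9033abf9fd2badfe68c9e3",
    input={
        "prompt": "List three benefits of open source machine learning.",
        "seed": 42,
        "temperature": 0.1,
        "top_k": 10,
        "top_p": 0.5,
        "max_new_tokens": 200,
        "stop_sequences": "<end>",
    },
)
print("".join(output))
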
Output Schema

Output

Type: array · Items Type: string
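
The output is an array of strings rather than a single string: each element is a chunk of the streamed completion. A minimal sketch of turning it back into text, assuming the output has already been fetched as a list of string chunks:

# Each element of the output array is a chunk of the streamed completion;
# concatenating the chunks recovers the full text.
chunks = ["open source machine learning\n", "a gift to the world, ", "a treasure trove"]
completion = "".join(chunks)
print(completion)
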

Example Execution Logs
MLC is currently not using any LoRAs.
Your formatted prompt is:
[INST] <<SYS>>
You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature.
If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.
<</SYS>>
Can you write a poem about open source machine learning? Let's make it in the style of E. E. Cummings. [/INST]
Not using LoRA
hostname: model-d-c6559c5791b50af57b69f4a73f8e021c-7d68c66789-ntgcq
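
The log above shows the Llama 2 chat template the server applies before generation: the system prompt is wrapped in <<SYS>> ... <</SYS>> and the whole exchange in [INST] ... [/INST]. Below is a minimal sketch of that formatting step for readers who want to reproduce it locally; the function name is illustrative and the exact whitespace may differ slightly from the server's.

def format_llama2_chat_prompt(system_prompt: str, user_prompt: str) -> str:
    # Wrap the system and user prompts in Llama 2's [INST]/<<SYS>> template,
    # mirroring the formatted prompt printed in the logs above.
    return (
        "[INST] <<SYS>>\n"
        f"{system_prompt}\n"
        "<</SYS>>\n\n"
        f"{user_prompt} [/INST]"
    )

print(format_llama2_chat_prompt(
    "You are a helpful assistant.",
    "Can you write a poem about open source machine learning?",
))
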
Version Details
Version ID
02e509c789964a7ea8736978a43525956ef40397be9033abf9fd2badfe68c9e3
Version Created
September 13, 2023
Run on Replicate →