lucataco/upstage-llama-2-70b-instruct-v2 🔢📝 → 📝

▶️ 3.3K runs 📅 Aug 2023 ⚙️ Cog 0.8.1 🔗 GitHub
code-generation question-answering text-generation

About

Upstage/Llama-2-70B-instruct-v2, packaged as a GPTQ-quantized build.

Example Output

Prompt:

"Tell me about AI"

Output

AI stands for Artificial Intelligence, which refers to the development of computer systems that can perform tasks that would normally require human intelligence, such as learning, problem-solving, decision making, and language understanding. AI technology has been rapidly advancing in recent years, with applications ranging from virtual assistants like Siri and Alexa, to self-driving cars, medical diagnosis, and even creating art and music. There are various approaches to developing AI, including machine learning, deep learning, and neural networks, all aimed at enabling machines to learn from data and improve their performance over time. While AI holds great potential for improving our lives and solving complex problems, it also raises important ethical questions about its impact on society and the future of work.

Performance Metrics

90.35s Prediction Time
785.49s Total Time
All Input Parameters
{
  "top_p": 0.95,
  "prompt": "Tell me about AI",
  "temperature": 0.75,
  "system_prompt": "You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe.  Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.",
  "max_new_tokens": 512,
  "repetition_penalty": 1.1
}
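
For reference, the prediction above can be reproduced with the replicate Python client. This is a minimal sketch, assuming the client is installed (pip install replicate) and REPLICATE_API_TOKEN is set in the environment; the exact return type can vary by client version.

import replicate

output = replicate.run(
    "lucataco/upstage-llama-2-70b-instruct-v2:4a954f75869cae9f2b39a6b28fb978764a93b3f50b8672fb7184f4e5600ddd8b",
    input={
        "prompt": "Tell me about AI",
        "top_p": 0.95,
        "temperature": 0.75,
        "max_new_tokens": 512,
        "repetition_penalty": 1.1,
    },
)

# The output schema declares a plain string; some client versions stream
# chunks instead, so join them if an iterator comes back.
print(output if isinstance(output, str) else "".join(output))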
Input Parameters
top_p | Type: number | Default: 0.95 | Range: 0.01 - 1
When decoding text, samples from the most likely tokens whose cumulative probability adds up to top_p; lower this to ignore less likely tokens (see the sampling sketch after this list).
prompt | Type: string | Default: Tell me about AI
Prompt to send to the model.
temperature | Type: number | Default: 0.75 | Range: 0 - 5
Randomness of outputs; 0 is deterministic, higher values give more random output.
system_prompt | Type: string | Default: You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.
System prompt that helps guide model behavior.
max_new_tokens | Type: integer | Default: 512 | Range: 1 - 2048
Maximum number of new tokens to generate.
repetition_penalty | Type: number | Default: 1.1 | Range: 0 - 5
Penalty for repeated words in generated text; 1 is no penalty, values greater than 1 discourage repetition, less than 1 encourage it.
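
To make the top_p and temperature semantics concrete, here is an illustrative sketch of temperature scaling followed by nucleus (top-p) sampling. It is not this model's code; the function name and the toy logits are hypothetical, and only numpy is assumed.

import numpy as np

def sample_next_token(logits, temperature=0.75, top_p=0.95, rng=None):
    """Pick one token id from raw logits using temperature + nucleus sampling."""
    rng = rng or np.random.default_rng()
    # Temperature rescales the logits: lower values sharpen the distribution,
    # and temperature approaching 0 approaches greedy (deterministic) decoding.
    scaled = logits / max(temperature, 1e-8)
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    # Nucleus (top-p) filtering: keep the smallest set of tokens whose
    # cumulative probability reaches top_p, then drop the long tail.
    order = np.argsort(probs)[::-1]
    cumulative = np.cumsum(probs[order])
    cutoff = np.searchsorted(cumulative, top_p) + 1
    keep = order[:cutoff]
    kept_probs = probs[keep] / probs[keep].sum()
    return int(rng.choice(keep, p=kept_probs))

# Example with a toy 5-token vocabulary.
print(sample_next_token(np.array([2.0, 1.5, 0.3, -1.0, -2.0])))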
Output Schema

Output

Type: string

Version Details
Version ID
4a954f75869cae9f2b39a6b28fb978764a93b3f50b8672fb7184f4e5600ddd8b
Version Created
August 11, 2023
Run on Replicate →
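
For completeness, the same version can be pinned through Replicate's HTTP predictions API. The sketch below assumes a valid API token in REPLICATE_API_TOKEN and the requests package; the endpoint, "Token" authorization scheme, and polling flow follow Replicate's public API as documented around this release, so check the current docs before relying on them.

import os
import time
import requests

API_URL = "https://api.replicate.com/v1/predictions"
HEADERS = {
    "Authorization": f"Token {os.environ['REPLICATE_API_TOKEN']}",
    "Content-Type": "application/json",
}

# Create a prediction pinned to the version ID listed above.
prediction = requests.post(API_URL, headers=HEADERS, json={
    "version": "4a954f75869cae9f2b39a6b28fb978764a93b3f50b8672fb7184f4e5600ddd8b",
    "input": {"prompt": "Tell me about AI", "max_new_tokens": 512},
}).json()

# Poll the prediction until it reaches a terminal state.
while prediction["status"] not in ("succeeded", "failed", "canceled"):
    time.sleep(5)
    prediction = requests.get(prediction["urls"]["get"], headers=HEADERS).json()

print(prediction.get("output"))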