tomasmcm/zephyr-7b-beta 🔢📝 → 📝
About
Source: HuggingFaceH4/zephyr-7b-beta ✦ Quant: TheBloke/zephyr-7B-beta-AWQ ✦ Zephyr is a series of language models trained to act as helpful assistants. Zephyr-7B-β is the second model in the series.
Example Output
"<|system|>
<|user|>
Tell me about AI
<|assistant|>"
Output
Artificial intelligence (AI) refers to the development of computer systems that can perform tasks that typically require human intelligence, such as learning, reasoning, and decision-making. AI technologies are designed to enable machines to process massive amounts of data quickly and accurately, identify patterns, and make informed decisions based on that information.
The use of AI has expanded rapidly in recent years, with applications ranging from virtual assistants like Siri and Alexa to autonomous vehicles and medical diagnostics. In addition to these practical applications, researchers and developers continue to explore the potential of AI to revolutionize a wide range of fields, including education,
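The prompt above follows Zephyr's chat template: `<|system|>`, `<|user|>`, and `<|assistant|>` blocks, each closed by a `</s>` end-of-sequence token, as visible in the full prompt string under All Input Parameters. A minimal sketch of assembling such a single-turn prompt (`build_zephyr_prompt` is a hypothetical helper, not part of the model's API):

```python
def build_zephyr_prompt(user_message: str, system_message: str = "") -> str:
    """Assemble a single-turn prompt in the Zephyr chat format.

    Each role block is closed with the "</s>" end-of-sequence token;
    the prompt ends at the open <|assistant|> tag so the model
    continues with its reply.
    """
    return (
        f"<|system|>\n{system_message}</s>\n"
        f"<|user|>\n{user_message}</s>\n"
        f"<|assistant|>"
    )

prompt = build_zephyr_prompt("Tell me about AI")
# Matches the "prompt" field shown under All Input Parameters.
print(prompt)
```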
All Input Parameters
{
  "top_k": 50,
  "top_p": 0.95,
  "prompt": "<|system|>\n</s>\n<|user|>\nTell me about AI</s>\n<|assistant|>",
  "temperature": 0.8,
  "max_new_tokens": 128,
  "presence_penalty": 1
}
Input Parameters
- top_k
- The number of highest probability tokens to consider for generating the output. If > 0, only keep the top k tokens with highest probability (top-k filtering).
- top_p
- A probability threshold for generating the output. If < 1.0, only keep the top tokens with cumulative probability >= top_p (nucleus filtering). Nucleus filtering is described in Holtzman et al. (http://arxiv.org/abs/1904.09751).
- prompt (required)
- Text prompt to send to the model.
- temperature
- The value used to modulate the next token probabilities.
- max_new_tokens
- The maximum number of tokens the model should generate as output.
- presence_penalty
- Penalty applied to tokens that have already appeared in the generated text; positive values discourage repetition and encourage the model to introduce new tokens.
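The sampling parameters above compose: top-k filtering keeps the k most likely tokens, then nucleus (top-p) filtering keeps the smallest prefix of the renormalized distribution whose cumulative probability reaches top_p. A minimal plain-Python sketch of that filtering logic (an illustration only, not this model's actual implementation; `top_k_top_p_filter` is a hypothetical name):

```python
import math

def top_k_top_p_filter(logits, top_k=50, top_p=0.95):
    """Return the indices of tokens kept for sampling after
    top-k filtering followed by nucleus (top-p) filtering."""
    # Sort token indices by descending logit.
    order = sorted(range(len(logits)), key=lambda i: logits[i], reverse=True)
    # Top-k: keep only the k highest-probability tokens (if top_k > 0).
    if top_k > 0:
        order = order[:top_k]
    # Softmax over the surviving logits (shifted by the max for stability).
    m = max(logits[i] for i in order)
    exps = [math.exp(logits[i] - m) for i in order]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Nucleus: keep the smallest prefix whose cumulative probability
    # reaches top_p; at least one token always survives.
    kept, cumulative = [], 0.0
    for idx, p in zip(order, probs):
        kept.append(idx)
        cumulative += p
        if cumulative >= top_p:
            break
    return kept
```

With a sharply peaked distribution, nucleus filtering can reduce the candidate set to a single token even when top_k would have kept many more.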
Example Execution Logs
Processed prompts:   0%|          | 0/1 [00:00<?, ?it/s]
Processed prompts: 100%|██████████| 1/1 [00:01<00:00, 1.56s/it]
Generated 128 tokens in 1.5597317218780518 seconds.
Version Details
- Version ID
- 961cd6665b811d0c43c0b9488b6dfa85ff5c7bfb875e93b4533e4c7f96c7c526
- Version Created
- October 29, 2023