tomasmcm/go-bruins-v2 📝🔢 → 📝
About
Source: rwitz/go-bruins-v2 ✦ Quant: TheBloke/go-bruins-v2-AWQ ✦ An AWQ-quantized build of go-bruins-v2, a language model aimed at generating high-quality, human-like text for NLP applications

Example Output
Prompt:
"
<|im_start|>system
- You are a helpful assistant chatbot.
- You answer questions.
- You are excited to be able to help the user, but will refuse to do anything that could be considered harmful to the user.
- You are more than just an information source, you are also able to write poetry, short stories, and make jokes.<|im_end|>
<|im_start|>user
Explain photosynthesis<|im_end|>
<|im_start|>assistant
Output
During photosynthesis, green plants and certain other organisms convert sunlight into chemical energy through a process that uses water and carbon dioxide from the environment. This chemical energy is stored in the form of glucose, which can be used as a source of fuel by the plant. The overall equation for photosynthesis is:
6CO₂ + 6H₂O + light energy → C₆H₁₂O₆ + 6O₂
In this equation, 6 molecules of carbon dioxide (CO₂) combine with 6 molecules of water (
[generation truncated here by the max_tokens = 128 limit]
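The prompt uses the ChatML template: <|im_start|>role … <|im_end|> blocks, ending with an open assistant turn. As a minimal sketch, a helper like the following could assemble such a prompt (build_chatml_prompt is a hypothetical name, not part of this model's API):

# Minimal sketch of assembling a ChatML prompt in Python.
# `build_chatml_prompt` is a hypothetical helper, not part of this model's API.

def build_chatml_prompt(system: str, user: str) -> str:
    """Build a ChatML prompt ending with an open assistant turn,
    matching the format of the example above."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        "<|im_start|>assistant"
    )

prompt = build_chatml_prompt(
    system="- You are a helpful assistant chatbot.\n- You answer questions.",
    user="Explain photosynthesis",
)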
Performance Metrics
- Prediction Time: 4.05s
- Total Time: 31.52s
All Input Parameters
{ "top_k": -1, "top_p": 0.95, "prompt": "<|im_start|>system\n- You are a helpful assistant chatbot.\n- You answer questions.\n- You are excited to be able to help the user, but will refuse to do anything that could be considered harmful to the user.\n- You are more than just an information source, you are also able to write poetry, short stories, and make jokes.<|im_end|>\n<|im_start|>user\nExplain photosynthesis<|im_end|>\n<|im_start|>assistant", "max_tokens": 128, "temperature": 0.8, "presence_penalty": 0, "frequency_penalty": 0 }
Input Parameters
- stop: List of strings that stop generation when they are produced. The returned output will not contain the stop strings.
- top_k: Integer controlling the number of top tokens to consider. Set to -1 to consider all tokens.
- top_p: Float controlling the cumulative probability of the top tokens to consider. Must be in (0, 1]. Set to 1 to consider all tokens. (See the sampling sketch after this list.)
- prompt (required): Text prompt to send to the model.
- max_tokens: Maximum number of tokens to generate per output sequence.
- temperature: Float controlling the randomness of sampling. Lower values make the model more deterministic; higher values make it more random. Zero means greedy sampling.
- presence_penalty: Float penalizing new tokens based on whether they already appear in the generated text. Values > 0 encourage the model to use new tokens; values < 0 encourage repetition.
- frequency_penalty: Float penalizing new tokens based on their frequency in the generated text so far. Values > 0 encourage the model to use new tokens; values < 0 encourage repetition.
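To make the sampling parameters concrete, here is a small numpy sketch of the standard top-k / top-p (nucleus) filtering that common samplers apply before drawing a token. This is illustrative only, not this model's exact implementation:

import numpy as np

def filter_probs(logits: np.ndarray, top_k: int = -1, top_p: float = 1.0) -> np.ndarray:
    """Restrict a next-token distribution by top_k / top_p.
    top_k = -1 and top_p = 1.0 both mean 'consider all tokens'."""
    probs = np.exp(logits - logits.max())      # numerically stable softmax
    probs /= probs.sum()
    order = np.argsort(probs)[::-1]            # token ids, most probable first
    keep = np.ones_like(probs, dtype=bool)
    if top_k > 0:                              # keep only the k most probable tokens
        keep[order[top_k:]] = False
    if top_p < 1.0:                            # smallest prefix with cumulative mass >= top_p
        cutoff = np.searchsorted(np.cumsum(probs[order]), top_p) + 1
        keep[order[cutoff:]] = False
    probs = np.where(keep, probs, 0.0)
    return probs / probs.sum()                 # renormalize, then sample from this

# Example: temperature scales the logits before filtering (temperature = 0.8).
logits = np.array([2.0, 1.0, 0.5, -1.0]) / 0.8
print(filter_probs(logits, top_k=-1, top_p=0.95))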
Output Schema
Output
Example Execution Logs
Processed prompts:   0%|          | 0/1 [00:00<?, ?it/s]
Processed prompts: 100%|██████████| 1/1 [00:03<00:00, 3.97s/it]
Generated 128 tokens in 3.9865102767944336 seconds.
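For reference, 128 tokens in roughly 3.99 seconds works out to a decode speed of about 32 tokens per second.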
Version Details
- Version ID: a201caf3e3847babfb7ebfe8d01fe8e4a3a5bce04e044ff793cc05fceed1b198
- Version Created: December 10, 2023