google-deepmind/gemma-7b
About
7B base version of Google’s Gemma model
Example Output
Prompt:
"Here is Machine Learning poem:"
Output
Machine learning,
A powerful tool,
But also a burden,
For it can’t be controlled.
It’s a double-edged sword,
A blessing and a curse,
A tool for good,
Or a tool for worse.
It can help us solve,
Complex problems with ease,
But it can also cause,
Damage and distress.
It’s a technology,
That’s changing the world,
But we must be careful,
Not to be misled by its charm.
We must consider,
The risks and the rewards,
And use it responsibly,
For the good of all.
So let’s embrace,
The power of machine learning,
And use it wisely,
For a better tomorrow.
If you like this poem, you may also like the following articles:
Performance Metrics
- Prediction Time: 12.44s
- Total Time: 12.47s
All Input Parameters
{
"top_k": 50,
"top_p": 0.95,
"prompt": "Here is Machine Learning poem:",
"temperature": 0.7,
"max_new_tokens": 256,
"min_new_tokens": -1,
"repetition_penalty": 1
}
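The same prediction can be reproduced programmatically. The sketch below is a minimal example assuming the official replicate Python client and a REPLICATE_API_TOKEN set in the environment; the exact shape of the return value (a list of text chunks versus a streaming iterator) can vary by client version, so the chunks are simply joined.

# Sketch: reproducing the prediction above with the Replicate Python client.
# Assumes `pip install replicate` and REPLICATE_API_TOKEN set in the environment.
import replicate

output = replicate.run(
    "google-deepmind/gemma-7b:2ca65f463a2c0cfef4dbc4ba70d227ed96455ef6020c1f6983b2a4c4f3ecb4ec",
    input={
        "top_k": 50,
        "top_p": 0.95,
        "prompt": "Here is Machine Learning poem:",
        "temperature": 0.7,
        "max_new_tokens": 256,
        "min_new_tokens": -1,
        "repetition_penalty": 1,
    },
)

# Text models on Replicate usually stream their output as chunks of text,
# so the pieces are joined here to recover the full completion.
print("".join(output))

Passing the full version ID pins the prediction to the build listed under Version Details below.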
Input Parameters
- top_k: When decoding text, sample only from the k most likely tokens; lower values ignore less likely tokens (see the sampling sketch after this list).
- top_p: When decoding text, sample from the smallest set of tokens whose cumulative probability reaches p; lower values ignore less likely tokens.
- prompt: Prompt to send to the model.
- temperature: Adjusts the randomness of outputs; values above 1 are more random, 0 is deterministic, and 0.75 is a good starting value.
- max_new_tokens: Maximum number of tokens to generate. A word is generally 2-3 tokens.
- min_new_tokens: Minimum number of tokens to generate. Set to -1 to disable. A word is generally 2-3 tokens.
- repetition_penalty: Controls how repetitive the text can be; lower values allow more repetition, higher values penalize it. Set to 1.0 to disable.
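For readers unfamiliar with these controls, the following sketch shows how temperature, top_k, and top_p typically combine when choosing the next token. It is a generic illustration of standard top-k plus nucleus sampling over a toy list of logits, not Gemma's actual decoding code, and the function name sample_next_token is hypothetical.

# Sketch: how temperature, top_k, and top_p interact when sampling one token.
# Illustrative only; the model's real decoding loop is not shown here.
import math
import random

def sample_next_token(logits, temperature=0.7, top_k=50, top_p=0.95):
    # Temperature scales the logits: values below 1 sharpen the distribution,
    # values above 1 flatten it toward uniform (more random).
    scaled = [l / max(temperature, 1e-8) for l in logits]

    # Softmax over the scaled logits.
    m = max(scaled)
    exps = [math.exp(l - m) for l in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]

    # top_k: keep only the k most likely tokens.
    ranked = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)[:top_k]

    # top_p (nucleus): keep the smallest prefix of that ranking whose
    # cumulative probability reaches top_p.
    kept, cumulative = [], 0.0
    for i in ranked:
        kept.append(i)
        cumulative += probs[i]
        if cumulative >= top_p:
            break

    # Renormalize over the surviving tokens and sample one of them.
    mass = sum(probs[i] for i in kept)
    return random.choices(kept, weights=[probs[i] / mass for i in kept])[0]

Lowering temperature, top_k, or top_p all narrow the candidate set; in the limit of temperature approaching 0 the sampling reduces to greedy decoding.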
Output Schema
Version Details
- Version ID: 2ca65f463a2c0cfef4dbc4ba70d227ed96455ef6020c1f6983b2a4c4f3ecb4ec
- Version Created: February 21, 2024