google-deepmind/gemma2-9b-it 🔢📝 → 📝
About
Google's Gemma 2 9B instruction-tuned model

Example Output
"Write me a poem about Machine Learning."
Output
In silicon valleys, where data flows deep,
A mind awakens, from slumber it leaps.
No flesh and bone, but algorithms bright,
Learning patterns, day and night.
From numbers born, with logic's embrace,
It sifts through the world at its own pace.
Each input a seed, in knowledge sown,
Connections forged, wisdom grown.
Classifying images, predicting the flow,
Of language and stock, making sense of the show.
But questions arise, as it learns to perceive,
Can ethics be built, can feelings conceive?
The line blurs, what defines, the human art,
When machines learn, and play their own part.
Intrigue and fear, in equal measure,
As we share our world, with this new pleasure.
A tool, a partner, a foe in disguise,
Machine learning, with hope in its eyes.
The future unfolds, a mystery untold,
Let's guide the journey, not lose control.
For in this dance, with code and design,
We shape the future, one algorithm at a time.
All Input Parameters
{ "top_k": 50, "top_p": 0.9, "prompt": "Write me a poem about Machine Learning.", "temperature": 0.6, "max_new_tokens": 512, "repetition_penalty": 1.2 }
Input Parameters
- top_k: When decoding text, samples from the top k most likely tokens; lower this to ignore less likely tokens.
- top_p: When decoding text, samples from the smallest set of most likely tokens whose cumulative probability reaches p; lower this to ignore less likely tokens.
- prompt: Prompt to send to the model.
- temperature: Adjusts the randomness of outputs; values above 1 are more random, 0 is deterministic, and 0.75 is a good starting value. (See the sampling sketch after this list for how temperature, top_k, and top_p interact.)
- max_new_tokens: Maximum number of tokens to generate. A word is generally 1-3 tokens.
- repetition_penalty: Controls how repetitive the text can be; lower values allow more repetition, higher values penalize it. Set to 1.0 to disable.
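To make the sampling parameters concrete, here is an illustrative, self-contained sketch of how top-k and top-p filtering combine with temperature scaling. The `sample_token` helper and the toy logits are hypothetical; the model's real sampler runs server-side and may differ in detail.

```python
import numpy as np

def sample_token(logits, top_k=50, top_p=0.9, temperature=0.6):
    """Pick one token index from raw logits using top-k, top-p, and temperature."""
    logits = np.asarray(logits, dtype=np.float64) / temperature  # <1 sharpens, >1 flattens
    # top_k: mask everything below the k-th highest logit.
    if 0 < top_k < len(logits):
        kth = np.sort(logits)[-top_k]
        logits = np.where(logits < kth, -np.inf, logits)
    # Softmax over the surviving logits.
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    # top_p: keep the smallest set of tokens whose cumulative probability reaches p.
    order = np.argsort(probs)[::-1]
    cum = np.cumsum(probs[order])
    cutoff = int(np.searchsorted(cum, top_p)) + 1
    keep = order[:cutoff]
    kept = probs[keep] / probs[keep].sum()  # renormalize the kept probability mass
    return int(np.random.choice(keep, p=kept))

# Toy 4-token vocabulary with made-up logits.
print(sample_token([2.0, 1.5, 0.3, -1.0], top_k=3, top_p=0.9, temperature=0.6))
```

Lowering temperature, top_k, or top_p all narrow the candidate pool, which is why the example run above pairs a moderate temperature (0.6) with a fairly permissive top_k (50) and top_p (0.9).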
Version Details
- Version ID: 24464993111a1b52b2ebcb2a88c76090a705950644dca3a3955ee40d80909f2d
- Version Created: June 27, 2024
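Replicate model references accept an optional `:version` suffix, so the version ID above can pin requests to this exact build. A minimal sketch, under the same client assumptions as before:

```python
import replicate

# Pin the run to the specific version listed in Version Details.
output = replicate.run(
    "google-deepmind/gemma2-9b-it:"
    "24464993111a1b52b2ebcb2a88c76090a705950644dca3a3955ee40d80909f2d",
    input={"prompt": "Write me a poem about Machine Learning."},
)
print("".join(output))
```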