google-deepmind/gemma-2b-it
About
2B instruct version of Google's Gemma model

Example Output
Prompt:
"Write me a poem about Machine Learning."
Output
Machines learn, they never sleep,
Algorithms spin, a pattern to keep.
From data's vast and ever-growing sea,
They weave insights, a knowledge to believe.
Supervised learning, a guiding hand,
Training data, shaping the sand.
Unsupervised, finding hidden gems,
Clustering data, revealing their fears.
Reinforcement, a learning spree,
Learning from mistakes, a path to believe.
Deep learning, a hidden black box,
Extracting patterns, where once there was no shock.
The future holds, a world transformed,
With machines learning, we are transformed.
A symphony of algorithms, a human guide,
Igniting a future, where possibilities reside.
Performance Metrics
- Prediction Time: 5.05s
- Total Time: 132.02s
All Input Parameters
```json
{
  "top_k": 50,
  "top_p": 0.95,
  "prompt": "Write me a poem about Machine Learning.",
  "temperature": 0.7,
  "max_new_tokens": 200,
  "min_new_tokens": -1,
  "repetition_penalty": 1.15
}
```
Input Parameters
- top_k: When decoding text, samples only from the top k most likely tokens; lower to ignore less likely tokens.
- top_p: When decoding text, samples only from the smallest set of tokens whose cumulative probability reaches p; lower to ignore less likely tokens.
- prompt: Prompt to send to the model.
- temperature: Adjusts the randomness of outputs; values above 1 are more random, 0 is deterministic, and 0.75 is a good starting value.
- max_new_tokens: Maximum number of tokens to generate. A word is generally 2-3 tokens.
- min_new_tokens: Minimum number of tokens to generate. Set to -1 to disable. A word is generally 2-3 tokens.
- repetition_penalty: Controls how repetitive the text can be; lower values are more repetitive, higher values less so. Set to 1.0 to disable.
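To make the interaction between these parameters concrete, here is a minimal sketch in pure Python of how temperature, top_k, top_p, and repetition_penalty typically filter a next-token distribution. The function names (`filter_logits`, `apply_repetition_penalty`) and the specific penalty formula (dividing positive logits by the penalty, multiplying negative ones) are illustrative assumptions modeled on common implementations, not the actual Gemma serving code.

```python
import math

def apply_repetition_penalty(logits, generated, penalty=1.15):
    """Illustrative repetition penalty: make already-generated tokens
    less likely by shrinking their logits (assumed formula)."""
    out = dict(logits)
    for tok in generated:
        if tok in out:
            # Positive logits shrink toward 0; negative logits move further down.
            out[tok] = out[tok] / penalty if out[tok] > 0 else out[tok] * penalty
    return out

def filter_logits(logits, temperature=0.7, top_k=50, top_p=0.95):
    """Illustrative sampling filter: returns the token -> probability map
    that sampling would draw from after temperature, top_k, and top_p."""
    # Temperature: rescale logits; lower values sharpen the distribution.
    scaled = {t: l / temperature for t, l in logits.items()}
    # Softmax (shifted by the max logit for numerical stability).
    m = max(scaled.values())
    exp = {t: math.exp(v - m) for t, v in scaled.items()}
    z = sum(exp.values())
    probs = {t: p / z for t, p in exp.items()}
    # top_k: keep only the k most likely tokens.
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)[:top_k]
    # top_p: keep the smallest prefix whose cumulative probability reaches p.
    kept, cum = [], 0.0
    for tok, p in ranked:
        kept.append((tok, p))
        cum += p
        if cum >= top_p:
            break
    # Renormalize the surviving tokens.
    z = sum(p for _, p in kept)
    return {tok: p / z for tok, p in kept}
```

With a low temperature and a small top_p, only the single most likely token survives; raising temperature or top_p lets more candidates through, which is why the defaults (0.7 and 0.95) produce varied but coherent text.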
Output Schema
Version Details
- Version ID
dff94eaf770e1fc211e425a50b51baa8e4cac6c39ef074681f9e39d778773626
- Version Created
- February 21, 2024