google-deepmind/gemma2-9b-it 🔢📝 → 📝

▶️ 31.1K runs 📅 Jun 2024 ⚙️ Cog 0.9.10 🔗 GitHub ⚖️ License
code-generation question-answering text-generation

About

Google's Gemma 2 9B instruction-tuned model

Example Output

Prompt:

"Write me a poem about Machine Learning."

Output

In silicon valleys, where data flows deep,
A mind awakens, from slumber it leaps.

No flesh and bone, but algorithms bright,
Learning patterns, day and night.

From numbers born, with logic's embrace,
It sifts through the world at its own pace.

Each input a seed, in knowledge sown,
Connections forged, wisdom grown.

Classifying images, predicting the flow,
Of language and stock, making sense of the show.

But questions arise, as it learns to perceive,
Can ethics be built, can feelings conceive?

The line blurs, what defines, the human art,
When machines learn, and play their own part.

Intrigue and fear, in equal measure,
As we share our world, with this new pleasure.

A tool, a partner, a foe in disguise,
Machine learning, with hope in its eyes.

The future unfolds, a mystery untold,
Let's guide the journey, not lose control.

For in this dance, with code and design,
We shape the future, one algorithm at a time.

Performance Metrics

15.61s Prediction Time
15.62s Total Time
All Input Parameters
{
  "top_k": 50,
  "top_p": 0.9,
  "prompt": "Write me a poem about Machine Learning.",
  "temperature": 0.6,
  "max_new_tokens": 512,
  "repetition_penalty": 1.2
}
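The parameter block above maps directly onto a Replicate API call. Here is a minimal sketch using the official Python client (`pip install replicate`); the payload mirrors the values shown above, the model is pinned to the Version ID listed below on this page, and the network call only fires if a `REPLICATE_API_TOKEN` is set in the environment:

```python
import os

# Model slug pinned to the Version ID shown in "Version Details" below.
MODEL = (
    "google-deepmind/gemma2-9b-it:"
    "24464993111a1b52b2ebcb2a88c76090a705950644dca3a3955ee40d80909f2d"
)

# Same values as the "All Input Parameters" block above.
payload = {
    "prompt": "Write me a poem about Machine Learning.",
    "top_k": 50,
    "top_p": 0.9,
    "temperature": 0.6,
    "max_new_tokens": 512,
    "repetition_penalty": 1.2,
}

if os.environ.get("REPLICATE_API_TOKEN"):
    import replicate  # third-party client: pip install replicate

    # The output schema is an array of strings, so join the chunks.
    output = replicate.run(MODEL, input=payload)
    print("".join(output))
```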
Input Parameters
top_k · Type: integer · Default: 50 · Range: 1 - 1000
When decoding text, samples from the k most likely tokens; lower it to ignore less likely tokens.
top_p · Type: number · Default: 0.9 · Range: 0.05 - 1
When decoding text, samples from the smallest set of tokens whose cumulative probability reaches p; lower it to ignore less likely tokens.
prompt · Type: string · Default: Write me a poem about Machine Learning.
Prompt to send to the model.
temperature · Type: number · Default: 0.6 · Range: 0.1 - 4
Adjusts the randomness of outputs: values greater than 1 are more random, values near 0 are nearly deterministic; 0.75 is a good starting value.
max_new_tokens · Type: integer · Default: 1024 · Range: 1 - 2048
Maximum number of tokens to generate. A word is generally 2-3 tokens.
repetition_penalty · Type: number · Default: 1.2 · Range: 0 - ∞
Controls how repetitive the text can be: values below 1 encourage repetition, values above 1 penalise it. Set to 1.0 to disable.
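To make these knobs concrete, here is a self-contained sketch (not the model's actual decoding code) of how temperature, top_k, and top_p combine during sampling: logits are scaled by the temperature, only the k highest-scoring tokens survive, and the nucleus (top-p) cut then keeps the smallest set whose cumulative probability reaches p:

```python
import math

def filter_logits(logits, temperature=0.6, top_k=50, top_p=0.9):
    """Return (token_index, probability) pairs after applying
    temperature scaling, a top-k cut, and a top-p (nucleus) cut."""
    # Temperature: >1 flattens the distribution, <1 sharpens it.
    scaled = [l / temperature for l in logits]
    # Top-k: keep only the k highest-scoring tokens.
    order = sorted(range(len(scaled)), key=lambda i: scaled[i], reverse=True)[:top_k]
    # Softmax over the survivors (subtract the max for numerical stability).
    mx = max(scaled[i] for i in order)
    exps = [math.exp(scaled[i] - mx) for i in order]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Top-p: keep the smallest prefix whose cumulative mass reaches p.
    kept, cum = [], 0.0
    for idx, p in zip(order, probs):
        kept.append((idx, p))
        cum += p
        if cum >= top_p:
            break
    # Renormalise so the kept probabilities sum to 1 before sampling.
    z = sum(p for _, p in kept)
    return [(i, p / z) for i, p in kept]
```

With top_p=1.0 and a large top_k this reduces to plain temperature sampling; lowering either value trims the long tail of unlikely tokens.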
Output Schema

Output

Type: array · Items Type: string
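Because the output is an array of string chunks (the model streams its generation piece by piece), callers typically concatenate them into one string. A trivial sketch with made-up chunks:

```python
# Hypothetical streamed chunks; the real API yields many small strings.
chunks = ["In silicon valleys, ", "where data flows deep,"]
text = "".join(chunks)
print(text)
```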

Version Details
Version ID
24464993111a1b52b2ebcb2a88c76090a705950644dca3a3955ee40d80909f2d
Version Created
June 27, 2024
Run on Replicate →