google-deepmind/gemma-2b 🔢📝 → 📝

▶️ 2.5K runs 📅 Feb 2024 ⚙️ Cog 0.9.4 🔗 GitHub ⚖️ License
text-generation

About

2B base version of Google’s Gemma model

Example Output

Prompt:

"Write me a poem about Machine Learning."

Output

I will not be afraid
To say that I do not know, and yet believe.
I may even write you one myself.

But please don’t let this turn into an argument.
It is time to give up the old arguments now:
What if? What if? Why should we care?

We are so much more than our neurons;
A few hundred thousand of them at most —
And they only represent 20% of our brain!

You can use your own body as data
With all those sensors on it every day.
Let us count how many steps you take in a day…

I am sure that no machine learning algorithm could ever beat the human mind at any game or activity.
As long as man remains rational …

I love logic, and mathematics, too.
When I look for something new to learn,
I always begin with some kind of math.

I was once told by a friend, who believed he knew everything,
That “All things exist because there is a God.”
But then I realized what really exists today:

“There is a God” means “God has never existed!”

Performance Metrics

5.41s Prediction Time
5.42s Total Time
All Input Parameters
{
  "top_k": 50,
  "top_p": 0.95,
  "prompt": "Write me a poem about Machine Learning.",
  "temperature": 0.7,
  "max_new_tokens": 256,
  "min_new_tokens": -1,
  "repetition_penalty": 1.15
}
Input Parameters
top_k
Type: integer, Default: 50
When decoding text, samples from the k most likely tokens; lower values ignore less likely tokens.
top_p
Type: number, Default: 0.95, Range: 0 - 1
When decoding text, samples from the smallest set of tokens whose cumulative probability exceeds p; lower values ignore less likely tokens.
prompt
Type: string, Default: Write me a poem about Machine Learning.
Prompt to send to the model.
temperature
Type: number, Default: 0.7, Range: 0.01 - 5
Adjusts the randomness of outputs: values above 1 are more random, values near the minimum are nearly deterministic; 0.75 is a good starting value.
max_new_tokens
Type: integer, Default: 200, Range: 1 - ∞
Maximum number of tokens to generate. A word is generally 2-3 tokens.
min_new_tokens
Type: integer, Default: -1, Range: -1 - ∞
Minimum number of tokens to generate. Set to -1 to disable. A word is generally 2-3 tokens.
repetition_penalty
Type: number, Default: 1, Range: 0 - ∞
Controls how repetitive the text can be: lower values allow more repetition, higher values allow less. Set to 1.0 to disable.
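The sampling parameters above combine roughly as follows. This is an illustrative sketch of standard temperature / repetition-penalty / top-k / top-p decoding, not Gemma's actual implementation; the function name and the toy logits are hypothetical.

```python
# Illustrative sketch (not the model's real code) of how temperature,
# repetition_penalty, top_k, and top_p interact when picking the next token.
import math

def next_token_probs(logits, generated, temperature=0.7, top_k=50,
                     top_p=0.95, repetition_penalty=1.15):
    logits = list(logits)
    # Repetition penalty: dampen logits of tokens already generated.
    for t in set(generated):
        if logits[t] > 0:
            logits[t] /= repetition_penalty
        else:
            logits[t] *= repetition_penalty
    # Temperature: scale logits before softmax (lower -> sharper distribution).
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(l - m) for l in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Top-k: keep only the k most likely tokens.
    ranked = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)[:top_k]
    # Top-p: keep the smallest prefix whose cumulative probability reaches p.
    kept, cum = [], 0.0
    for i in ranked:
        kept.append(i)
        cum += probs[i]
        if cum >= top_p:
            break
    # Renormalise over the surviving tokens; sampling draws from this.
    z = sum(probs[i] for i in kept)
    return {i: probs[i] / z for i in kept}

# Toy example: token 0 was already generated, so its logit is penalised.
probs = next_token_probs([2.0, 1.0, 0.5, -1.0], generated=[0], top_k=3)
```

With these defaults, the least likely token is cut by top_k/top_p and the remaining probabilities are renormalised to sum to 1.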
Output Schema

Output

Type: array, Items Type: string
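Because the output schema is an array of string chunks rather than a single string, clients typically concatenate the items to recover the full generation. A minimal sketch, with made-up chunk values:

```python
# The model's output is a list of string chunks (e.g. streamed pieces
# of the generation); joining them yields the complete text.
chunks = ["I will not be afraid\n", "To say that I do not know,", " and yet believe."]
text = "".join(chunks)
```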

Version Details
Version ID
26b2c530f16236a4816611509730c2e6f7b27875a6d33ec5cff42961750c98d8
Version Created
February 21, 2024
Run on Replicate →