google-deepmind/gemma-7b-it

95.0K runs · Feb 2024 · Cog 0.9.4 · GitHub · License
code-generation question-answering text-generation

About

The 7B instruction-tuned (instruct) version of Google’s Gemma model

Example Output

Prompt:

"Write me a poem about Machine Learning."

Output

In the realm of data, a tale unfolds,
Where algorithms dance, tales are told.
A machine learning model, a master of craft,
Unveils patterns, hidden in the draft.

With neural networks, it learns to see,
The secrets embedded in data, free.
From images to text, it can perceive,
And make decisions, beyond belief.

The model's mind, a labyrinth of code,
Uncovers insights, secrets to be told.
It finds relationships, hidden in the past,
And predicts future trends, built to last.

But beware, dear friend, of biases blind,
For algorithms can amplify what's wrong.
In fairness and justice, we must stand,
To ensure that AI serves our hand.

So let us harness this powerful force,
For the betterment of humanity's course.
With machine learning, we can soar,
To heights we've never reached before.

A journey into the future, a tale to be told,
Where machine learning transforms the old.
A tale of progress, a tale of might,
A future where data shines so bright.

Performance Metrics

8.94s Prediction Time
8.98s Total Time
All Input Parameters
{
  "top_k": 50,
  "top_p": 0.95,
  "prompt": "Write me a poem about Machine Learning.",
  "temperature": 0.7,
  "max_new_tokens": 512,
  "min_new_tokens": -1,
  "repetition_penalty": 1
}
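For reference, an equivalent prediction can be reproduced with the Replicate Python client. The snippet below is a minimal sketch, not official usage documentation: it assumes the replicate package is installed, that REPLICATE_API_TOKEN is set in the environment, and it pins the version ID listed under Version Details further down this page.

import replicate

# Pin the model to the exact version shown under "Version Details".
output = replicate.run(
    "google-deepmind/gemma-7b-it:2790a695e5dcae15506138cc4718d1106d0d475e6dca4b1d43f42414647993d5",
    input={
        "top_k": 50,
        "top_p": 0.95,
        "prompt": "Write me a poem about Machine Learning.",
        "temperature": 0.7,
        "max_new_tokens": 512,
        "min_new_tokens": -1,
        "repetition_penalty": 1,
    },
)

# Per the Output Schema, the result is an array of strings; join it into one text.
print("".join(output))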
Input Parameters
top_k
Type: integer · Default: 50
When decoding text, samples from the top-k most likely tokens; lower this to ignore less likely tokens.

top_p
Type: number · Default: 0.95 · Range: 0 - 1
When decoding text, samples from the smallest set of tokens whose cumulative probability reaches top-p; lower this to ignore less likely tokens.

prompt
Type: string · Default: "Write me a poem about Machine Learning."
Prompt to send to the model.

temperature
Type: number · Default: 0.7 · Range: 0.01 - 5
Adjusts the randomness of outputs: values near 0 are close to deterministic, values above 1 are increasingly random; 0.75 is a good starting value.

max_new_tokens
Type: integer · Default: 200 · Range: 1 - ∞
Maximum number of tokens to generate. A word is generally 2-3 tokens.

min_new_tokens
Type: integer · Default: -1 · Range: -1 - ∞
Minimum number of tokens to generate. Set to -1 to disable. A word is generally 2-3 tokens.

repetition_penalty
Type: number · Default: 1.15 · Range: 0 - ∞
Controls how repetitive the generated text can be: values below 1 allow more repetition, values above 1 penalize it. Set to 1.0 to disable.
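The sampling parameters above interact roughly as sketched below. This is a generic illustration of temperature scaling plus top-k and top-p (nucleus) filtering, not the code running behind this endpoint; the function name sample_next_token is invented for the example, and repetition_penalty is omitted.

import numpy as np

def sample_next_token(logits, temperature=0.7, top_k=50, top_p=0.95):
    """Illustrative decoding step combining temperature, top-k and top-p."""
    logits = np.asarray(logits, dtype=np.float64)

    # temperature: dividing logits by T < 1 sharpens the distribution, T > 1 flattens it.
    logits = logits / max(temperature, 1e-6)

    # top_k: keep only the k highest-scoring tokens.
    if top_k > 0:
        k = min(top_k, logits.size)
        kth_best = np.sort(logits)[-k]
        logits = np.where(logits < kth_best, -np.inf, logits)

    # Softmax over the surviving candidates.
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()

    # top_p (nucleus): keep the smallest set of tokens whose cumulative probability reaches top_p.
    order = np.argsort(probs)[::-1]
    cutoff = np.searchsorted(np.cumsum(probs[order]), top_p) + 1
    nucleus = np.zeros_like(probs)
    nucleus[order[:cutoff]] = probs[order[:cutoff]]
    nucleus /= nucleus.sum()

    return int(np.random.choice(len(probs), p=nucleus))

Lowering top_k or top_p narrows the candidate set and makes output more conservative; repetition_penalty (not shown) would additionally down-weight tokens that already appear in the generated text.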
Output Schema

Output

Type: array · Items: string
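Because the output is an array of strings (token-level chunks) rather than a single string, it can be consumed incrementally as well as joined at the end. A minimal sketch, assuming output is the iterable returned by the client call shown earlier:

chunks = []
for chunk in output:                    # each item is a string fragment, per the schema above
    print(chunk, end="", flush=True)    # display text as chunks are consumed
    chunks.append(chunk)
full_text = "".join(chunks)             # reassemble the complete response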

Version Details
Version ID
2790a695e5dcae15506138cc4718d1106d0d475e6dca4b1d43f42414647993d5
Version Created
February 21, 2024
Run on Replicate →