google-deepmind/gemma2-27b-it (text in → text out)

12.9K runs · Jun 2024 · Cog 0.9.10 · GitHub · Paper · License
Tags: code-generation, document-summarization, question-answering, text-generation

About

Google's Gemma 2 27B instruction-tuned model

Example Output

Prompt:

"Write me a poem about Machine Learning."

Output

From data's depths, a mind awakes,
A digital echo of patterns it takes.

It devours the numbers, the text and the signs,
Seeking hidden knowledge in intricate lines.

Algorithms whisper, secrets they tell,
Of how to predict, break free from our shell.

Each input a signal, each output a clue,
To unveil the unseen, what is true?

Patterns emerge through the noise,
As models are trained by ones and zeros deployed.
They learn from the data, refine with each pass,
Identifying trends both subtle and vast.

With every image labeled, each word understood,
The machine learns more, as its circuits get imbued
With insights that grow, like branches on trees,
Unraveling meaning with ease.

Neural networks weave their complex design,
Mimicking neurons, firing, align.
Deep learning uncovers complexity untold,
Secrets within datasets, waiting to unfold.

Decision trees branch out, exploring the way,
Through choices and options, they delve into the fray
Of decision-making, branching and choosing,
While algorithms for clustering keep the pattern accruing.

Regression analysis predicts where we stand,
Classifying outcomes, across time and land.
Support Vector

(Output cut off mid-sentence: this run set max_new_tokens to 256, so generation stopped at the token limit.)

Performance Metrics

13.29s Prediction Time
13.29s Total Time
All Input Parameters
{
  "top_k": 50,
  "top_p": 0.9,
  "prompt": "Write me a poem about Machine Learning.",
  "temperature": 0.6,
  "max_new_tokens": 256,
  "repetition_penalty": 1.2
}
Input Parameters

top_k
Type: integer · Default: 50 · Range: 1 – 1000
When decoding text, samples from the top k most likely tokens; lower to ignore less likely tokens.

top_p
Type: number · Default: 0.9 · Range: 0.05 – 1
When decoding text, samples from the top p fraction of cumulative probability mass; lower to ignore less likely tokens.

prompt
Type: string · Default: "Write me a poem about Machine Learning."
Prompt to send to the model.

temperature
Type: number · Default: 0.6 · Range: 0.1 – 4
Adjusts randomness of outputs: values near 0 are nearly deterministic, values above 1 are increasingly random. 0.75 is a good starting value.

max_new_tokens
Type: integer · Default: 1024 · Range: 1 – 2048
Maximum number of tokens to generate. A word is typically one to three tokens.

repetition_penalty
Type: number · Default: 1.2 · Range: 0 – ∞
Controls how repetitive the text can be: higher values penalize repetition, lower values allow more of it. Set to 1.0 to disable.
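The parameters above can be assembled into a request payload programmatically. Below is a minimal sketch that merges user overrides onto the documented defaults, range-checks them against the documented bounds, and (optionally) calls the model through Replicate's Python client. The `build_input` and `generate` helper names are illustrative, not part of any API; `replicate.run` is the real client entry point and requires a `REPLICATE_API_TOKEN` in your environment.

```python
# Documented (min, max) bounds from the Input Parameters section above.
BOUNDS = {
    "top_k": (1, 1000),
    "top_p": (0.05, 1.0),
    "temperature": (0.1, 4.0),
    "max_new_tokens": (1, 2048),
    "repetition_penalty": (0.0, float("inf")),
}


def build_input(prompt, **overrides):
    """Merge overrides onto the documented defaults and range-check them."""
    payload = {
        "prompt": prompt,
        "top_k": 50,
        "top_p": 0.9,
        "temperature": 0.6,
        "max_new_tokens": 1024,
        "repetition_penalty": 1.2,
    }
    payload.update(overrides)
    for name, (lo, hi) in BOUNDS.items():
        if not lo <= payload[name] <= hi:
            raise ValueError(f"{name}={payload[name]} is outside [{lo}, {hi}]")
    return payload


def generate(prompt, **overrides):
    """Run the model; needs the `replicate` package and an API token."""
    import replicate  # deferred import so build_input() works offline

    chunks = replicate.run(
        "google-deepmind/gemma2-27b-it",
        input=build_input(prompt, **overrides),
    )
    # Per the Output Schema, the result is an array of strings: join them.
    return "".join(chunks)
```

For example, the run shown above corresponds to `generate("Write me a poem about Machine Learning.", max_new_tokens=256)`.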
Output Schema

Output

Type: array · Items type: string
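Because the output schema is an array of string chunks rather than a single string, the full generation is recovered by concatenating the elements in order. A minimal sketch with a made-up chunk list:

```python
# The model streams its generation as an array of string chunks.
# Joining them in order reconstructs the full text.
chunks = ["From data's depths, ", "a mind awakes,"]
text = "".join(chunks)
print(text)  # From data's depths, a mind awakes,
```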

Version Details
Version ID
81e8d7f4dd5ec1b1bec7456d6d76426e1e63a2745bd34eb7a3d7987ba4ebd80e
Version Created
June 28, 2024