stability-ai/stablelm-tuned-alpha-7b 🔢📝 → 📝

▶️ 140.6K runs 📅 Apr 2023 ⚙️ Cog 0.7.0-beta17 🔗 GitHub ⚖️ License
document-summarization question-answering text-generation

About

7 billion parameter version of Stability AI's language model

Example Output

Prompt:

"How do you make ratatouille?"

Output

First start with some lovely vegetables like eggplant or zucchini - whatever you fancy! Then cook them up nice and tender. Next put on some herbs if you want something special, such as thyme or rosemary - maybe a sprinkling of smoked paprika for extra flavor. Once everything's cooked through stir it all together well so each spoonful contains delicious little bits of veggies, herbs and flavourings. And serve over plenty of good quality white rice – either keep it simple with basmati or let your guests choose their own pasta from amongst several available options... The end-result should look delectable too!

Performance Metrics

3.56s Prediction Time
3.66s Total Time
All Input Parameters
{
  "top_p": 1,
  "prompt": "How do you make ratatouille?",
  "max_tokens": 500,
  "temperature": 0.75,
  "repetition_penalty": 1.2
}
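The prediction above can be reproduced with the Replicate Python client. A minimal sketch, assuming `REPLICATE_API_TOKEN` is set in the environment; the version hash is copied from this page. Note the example JSON shows `max_tokens`, while the parameter schema on this page lists `max_new_tokens` — the schema name is used here.

```python
# Sketch: reproducing the example prediction via the Replicate client
# (pip install replicate). Assumes REPLICATE_API_TOKEN is exported.

MODEL = (
    "stability-ai/stablelm-tuned-alpha-7b:"
    "943c4afb4d0273cf1cf17c1070e182c903a9fe6b372df36b5447cf45935c42f2"
)

# The input parameters from the example above (max_new_tokens per the
# schema; the page's example JSON spells it max_tokens).
inputs = {
    "prompt": "How do you make ratatouille?",
    "top_p": 1,
    "max_new_tokens": 500,
    "temperature": 0.75,
    "repetition_penalty": 1.2,
}

def run(model=MODEL, model_inputs=inputs):
    # replicate.run streams output as an iterator of text chunks;
    # join them into one string.
    import replicate
    return "".join(replicate.run(model, input=model_inputs))

if __name__ == "__main__":
    print(run())
```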
Input Parameters
top_p
Type: number · Default: 1 · Range: 0.01 - 1
Used with top-p (nucleus) decoding: samples from the smallest set of most likely tokens whose cumulative probability reaches p; lower values ignore less likely tokens.
prompt
Type: string · Default: What's your mood today?
Input prompt.
temperature
Type: number · Default: 0.75 · Range: 0.01 - 5
Adjusts the randomness of outputs: values greater than 1 are more random, values near 0 are nearly deterministic, and 0.75 is a good starting value.
max_new_tokens
Type: integer · Default: 100 · Range: 1 - ∞
Maximum number of tokens to generate. A word is typically 1-3 tokens.
repetition_penalty
Type: number · Default: 1.2 · Range: 0.01 - 5
Penalty for repeated tokens in generated text: 1 is no penalty, values greater than 1 discourage repetition, and values less than 1 encourage it.
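The sampling parameters above combine roughly as follows. This is an illustrative sketch of temperature scaling, nucleus (top-p) filtering, and a Hugging-Face-style repetition penalty over a toy next-token distribution; the helper names are hypothetical and not part of this model's API.

```python
import math
import random

def apply_repetition_penalty(logits, generated, penalty=1.2):
    # HF-style penalty: for tokens already generated, divide positive
    # logits by the penalty and multiply negative ones, lowering their
    # probability when penalty > 1.
    out = list(logits)
    for t in set(generated):
        out[t] = out[t] / penalty if out[t] > 0 else out[t] * penalty
    return out

def sample_next_token(logits, temperature=0.75, top_p=1.0, rng=random):
    # Temperature scaling: divide logits before softmax. Values near 0
    # sharpen the distribution (nearly deterministic); values above 1
    # flatten it (more random).
    probs = [math.exp(l / temperature) for l in logits]
    total = sum(probs)
    probs = [p / total for p in probs]

    # Nucleus (top-p) filtering: keep the smallest set of tokens whose
    # cumulative probability reaches top_p, then sample from that set.
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept, cum = [], 0.0
    for i in order:
        kept.append(i)
        cum += probs[i]
        if cum >= top_p:
            break
    mass = sum(probs[i] for i in kept)
    r = rng.random() * mass
    for i in kept:
        r -= probs[i]
        if r <= 0:
            return i
    return kept[-1]
```

With a very small `top_p`, only the most likely token survives filtering, so decoding becomes greedy regardless of temperature.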
Output Schema

Output

Type: string

Example Execution Logs
/root/.pyenv/versions/3.8.16/lib/python3.8/site-packages/transformers/pipelines/base.py:1070: UserWarning: You seem to be using the pipelines sequentially on GPU. In order to maximize efficiency please use a dataset
warnings.warn(
Setting `pad_token_id` to `eos_token_id`:0 for open-end generation.
Version Details
Version ID
943c4afb4d0273cf1cf17c1070e182c903a9fe6b372df36b5447cf45935c42f2
Version Created
April 19, 2023