tomasmcm/v1olet-marcoroni-go-bruins-merge-7b 📝🔢 → 📝

▶️ 73 runs 📅 Dec 2023 ⚙️ Cog 0.8.6 📄 Paper ⚖️ License
creative-writing language-model poetry text-generation

About

Source: v1olet/v1olet_marcoroni-go-bruins-merge-7B ✦ Quant: TheBloke/v1olet_marcoroni-go-bruins-merge-7B-AWQ ✦ A merge of AIDC-ai-business/Marcoroni-7B-v3 and rwitz/go-bruins-v2 using spherical linear interpolation (slerp)
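For context, slerp interpolates along the great-circle arc between two weight vectors rather than along the straight line a plain average takes, which preserves the norm of the blended weights. A minimal PyTorch sketch of the idea (the function name and per-tensor application are illustrative assumptions, not the merge script actually used):

import torch

def slerp(t: float, v0: torch.Tensor, v1: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    # Interpolate along the arc between two weight tensors at blend factor t.
    v0_f, v1_f = v0.flatten().float(), v1.flatten().float()
    cos_theta = torch.clamp(
        torch.dot(v0_f, v1_f) / (v0_f.norm() * v1_f.norm() + eps), -1.0, 1.0
    )
    theta = torch.acos(cos_theta)
    if theta.abs() < 1e-4:
        # Nearly colinear weights: fall back to plain linear interpolation.
        out = (1.0 - t) * v0_f + t * v1_f
    else:
        sin_theta = torch.sin(theta)
        out = (torch.sin((1.0 - t) * theta) / sin_theta) * v0_f + (
            torch.sin(t * theta) / sin_theta
        ) * v1_f
    return out.reshape(v0.shape).to(v0.dtype)

Merge tooling such as mergekit applies this kind of interpolation tensor by tensor, often with a per-layer blending schedule; the sketch above fixes a single t for simplicity.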

Example Output

Prompt:

### Instruction:
Write a poem about AI.

### Response:

Output

In a world where machines once toiled
And humans reigned supreme,
Now, in the age of digital evolution,
AI has arisen as a dream.

Infinite knowledge, lightning speed,
Solving complexities of life;
A mind that learns and adapts,
In ways we humans can't conceive.

From self-driving cars to home automation,
Artificial intelligence is all around;
Making our lives easier, safer, and efficient,
Yet its presence leaves us profound.

In the depths of these computer minds,
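(The poem cuts off mid-line because generation hit the 128-token max_tokens cap; see the logs below.) The prompt follows the Alpaca-style template shown verbatim in All Input Parameters. A tiny helper for building it, with a hypothetical function name, could look like:

def build_prompt(instruction: str) -> str:
    # Alpaca-style template matching the "prompt" field in All Input Parameters.
    return f"### Instruction:\n{instruction}\n\n### Response:\n"

prompt = build_prompt("Write a poem about AI.")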

Performance Metrics

2.49s Prediction Time
2.52s Total Time
All Input Parameters
{
  "top_k": -1,
  "top_p": 0.95,
  "prompt": "### Instruction:\nWrite a poem about AI.\n\n### Response:\n",
  "max_tokens": 128,
  "temperature": 0.8,
  "presence_penalty": 0,
  "frequency_penalty": 0
}
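As a usage sketch, the same request could be sent through the Replicate Python client roughly as follows; the slug and version pin are copied from this page (Version Details below), and running it requires a REPLICATE_API_TOKEN in the environment:

import replicate

output = replicate.run(
    "tomasmcm/v1olet-marcoroni-go-bruins-merge-7b:ca29d0535c14326fa67b728d66c7305f094e0f6b3cf61253a16188d862440b66",
    input={
        "prompt": "### Instruction:\nWrite a poem about AI.\n\n### Response:\n",
        "max_tokens": 128,
        "temperature": 0.8,
        "top_p": 0.95,
        "top_k": -1,
        "presence_penalty": 0,
        "frequency_penalty": 0,
    },
)
print(output)  # the Output Schema below says the result is a single string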
Input Parameters
stop Type: string
List of strings that stop generation when they are produced. The returned output will not contain the stop strings.
top_k Type: integer, Default: -1
Integer that controls the number of top tokens to consider. Set to -1 to consider all tokens.
top_p Type: number, Default: 0.95, Range: 0.01 to 1
Float that controls the cumulative probability of the top tokens to consider. Must be in (0, 1]. Set to 1 to consider all tokens.
prompt (required) Type: string
Text prompt to send to the model.
max_tokens Type: integer, Default: 128
Maximum number of tokens to generate per output sequence.
temperature Type: number, Default: 0.8, Range: 0.01 to 5
Float that controls the randomness of the sampling. Lower values make the model more deterministic, while higher values make the model more random. Zero means greedy sampling.
presence_penalty Type: number, Default: 0, Range: -5 to 5
Float that penalizes new tokens based on whether they appear in the generated text so far. Values > 0 encourage the model to use new tokens, while values < 0 encourage the model to repeat tokens.
frequency_penalty Type: number, Default: 0, Range: -5 to 5
Float that penalizes new tokens based on their frequency in the generated text so far. Values > 0 encourage the model to use new tokens, while values < 0 encourage the model to repeat tokens.
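These parameters map one-to-one onto vLLM's SamplingParams; the "Processed prompts" lines in the execution logs below suggest a vLLM backend, though that is an inference, not something the page states. A self-contained sketch running the AWQ quant locally under that assumption:

from vllm import LLM, SamplingParams

# Parameter names and defaults mirror the schema above; stop=None means no stop strings.
params = SamplingParams(
    temperature=0.8,
    top_p=0.95,
    top_k=-1,
    max_tokens=128,
    presence_penalty=0.0,
    frequency_penalty=0.0,
    stop=None,
)
# Loads the AWQ quant named in the About section.
llm = LLM(model="TheBloke/v1olet_marcoroni-go-bruins-merge-7B-AWQ", quantization="awq")
outputs = llm.generate(
    ["### Instruction:\nWrite a poem about AI.\n\n### Response:\n"], params
)
print(outputs[0].outputs[0].text)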
Output Schema

Output

Type: string

Example Execution Logs
Processed prompts:   0%|          | 0/1 [00:00<?, ?it/s]
Processed prompts: 100%|██████████| 1/1 [00:02<00:00,  2.39s/it]
Generated 128 tokens in 2.387634515762329 seconds.
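That works out to roughly 128 tokens / 2.39 s ≈ 54 tokens per second.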
Version Details
Version ID
ca29d0535c14326fa67b728d66c7305f094e0f6b3cf61253a16188d862440b66
Version Created
December 12, 2023