fofr/neuromancer-13b

152 runs · September 2023 · Cog 0.8.1
cyberpunk text-generation text-style-transfer

About

llama-13b-base fine-tuned to rewrite text in the style of William Gibson's Neuromancer

Example Output

Prompt:

"Given the following input, rewrite it in the style of neuromancer:
[TO_NEURO_REWRITE] John needed coffee. [/TO_NEURO_REWRITE]
[NEUROMANCED] "

Output

John needed caffeine. He could have had a cup of tea or even black coffee, but he wanted that stimulant that made him feel like his brain was parting from his skull with a sucking sound as it expanded and grew sharper than any surgeon's knife. The cold liquid hit his stomach like an icepick, sending waves of nausea through his body. A smile crossed his face.
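The model expects the same template it was fine-tuned on, so it helps to build prompts programmatically. A minimal Python sketch; the helper name is illustrative, not part of the model:

def neuro_prompt(text):
    """Wrap plain text in the [TO_NEURO_REWRITE] template this fine-tune expects."""
    return (
        "Given the following input, rewrite it in the style of neuromancer:\n"
        f"[TO_NEURO_REWRITE] {text} [/TO_NEURO_REWRITE]\n"
        "[NEUROMANCED] "
    )

print(neuro_prompt("John needed coffee."))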

Performance Metrics

Prediction time: 2.27s
Total time: 8.22s
All Input Parameters
{
  "debug": false,
  "top_k": 55,
  "top_p": 0.9,
  "prompt": "Given the following input, rewrite it in the style of neuromancer:\n[TO_NEURO_REWRITE] John needed coffee. [/TO_NEURO_REWRITE]\n[NEUROMANCED] ",
  "temperature": 0.75,
  "max_new_tokens": 1000,
  "min_new_tokens": 100,
  "stop_sequences": "[/NEUROMANCED]"
}
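The same prediction can be reproduced with the Replicate Python client. This is a sketch, assuming the replicate package is installed and REPLICATE_API_TOKEN is set in the environment; the version ID is the one listed under Version Details below:

import replicate

output = replicate.run(
    "fofr/neuromancer-13b:e8f6fe03615d2ed962f4859d1d55e4d5497ddb357c6cb354f18b8de450cbee58",
    input={
        "prompt": (
            "Given the following input, rewrite it in the style of neuromancer:\n"
            "[TO_NEURO_REWRITE] John needed coffee. [/TO_NEURO_REWRITE]\n"
            "[NEUROMANCED] "
        ),
        "top_k": 55,
        "top_p": 0.9,
        "temperature": 0.75,
        "max_new_tokens": 1000,
        "min_new_tokens": 100,
        "stop_sequences": "[/NEUROMANCED]",
    },
)
print("".join(output))  # the output schema below is an array of string chunks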
Input Parameters
seed | Type: integer
Random seed. Leave blank to randomize the seed.
debug | Type: boolean | Default: false
Provide debugging output in logs.
top_k | Type: integer | Default: 50 | Range: 0 - ∞
When decoding text, samples from the top k most likely tokens; lower to ignore less likely tokens.
top_p | Type: number | Default: 0.9 | Range: 0 - 1
When decoding text, samples from the top p percentage of most likely tokens; lower to ignore less likely tokens.
prompt (required) | Type: string
Prompt to send to the model.
temperature | Type: number | Default: 0.75 | Range: 0.01 - 5
Adjusts randomness of outputs: greater than 1 is more random, 0 is deterministic; 0.75 is a good starting value.
max_new_tokens | Type: integer | Default: 128 | Range: 1 - ∞
Maximum number of tokens to generate. A word is generally 2-3 tokens.
min_new_tokens | Type: integer | Default: -1 | Range: -1 - ∞
Minimum number of tokens to generate. To disable, set to -1. A word is generally 2-3 tokens.
stop_sequences | Type: string
A comma-separated list of sequences at which to stop generation. For example, '<end>,<stop>' will stop generation at the first instance of '<end>' or '<stop>' (see the sketch after this list).
replicate_weights | Type: string
Path to fine-tuned weights produced by a Replicate fine-tune job.
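Note that stop_sequences is a single comma-separated string rather than a JSON list, which is easy to get wrong when calling the model programmatically. A small Python sketch of building it from a list of values ("</s>" is only an illustrative second stop, not taken from this model card):

stops = ["[/NEUROMANCED]", "</s>"]  # "</s>" is an assumed extra stop, for illustration
stop_sequences = ",".join(stops)    # -> "[/NEUROMANCED],</s>"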
Output Schema

Output

Type: array | Items type: string
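Because the output is an array of string chunks rather than a single string, callers usually concatenate before use. A short sketch; stripping the stop marker is only a precaution, since this page does not confirm whether it is echoed back:

def collect(chunks):
    text = "".join(chunks)
    # Defensive trim in case the [/NEUROMANCED] stop marker appears at the end of the output.
    return text.split("[/NEUROMANCED]")[0].strip()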

Version Details
Version ID
e8f6fe03615d2ed962f4859d1d55e4d5497ddb357c6cb354f18b8de450cbee58
Version Created
September 13, 2023