nateraw/llama-2-7b-paraphrase-v1

113 runs · Nov 2023 · Cog 0.8.6
Tags: paraphrasing, text-generation

About

Example Output

Prompt:

"The capital of France is Paris"

Output

The capital of France is located in Paris.

Performance Metrics

9.59s Prediction Time
12.20s Total Time
All Input Parameters
{
  "debug": false,
  "top_k": 50,
  "top_p": 0.9,
  "prompt": "The capital of France is Paris\n",
  "temperature": 0.75,
  "max_new_tokens": 128,
  "min_new_tokens": -1
}
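
A prediction like the one above can also be reproduced from code. The following is a minimal sketch assuming the official Replicate Python client (`pip install replicate`) and a `REPLICATE_API_TOKEN` exported in the environment; the version hash is the one listed under Version Details below.

import replicate

# Reproduce the example prediction with the same input parameters.
output = replicate.run(
    "nateraw/llama-2-7b-paraphrase-v1:17b76fbd699fcd4476b6d7292de8bfd1ee1b219f7ed81f0395da0631d00850cc",
    input={
        "debug": False,
        "top_k": 50,
        "top_p": 0.9,
        "prompt": "The capital of France is Paris\n",
        "temperature": 0.75,
        "max_new_tokens": 128,
        "min_new_tokens": -1,
    },
)

# The output schema is an array of strings, so join the chunks into one paraphrase.
print("".join(output))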
Input Parameters
seed Type: integer
Random seed. Leave blank to randomize the seed.
debug Type: boolean, Default: false
Provide debugging output in logs.
top_p Type: number, Default: 0.95, Range: 0 - 1
When decoding text, samples from the top p percentage of most likely tokens; lower this value to ignore less likely tokens.
prompt (required) Type: string
Prompt to send to the model.
temperature Type: number, Default: 0.7, Range: 0.01 - 5
Adjusts the randomness of outputs: values greater than 1 are more random, values near 0 are more deterministic. 0.75 is a good starting value.
return_logits Type: boolean, Default: false
If set, only return logits for the first token. Only useful for testing and debugging.
max_new_tokens Type: integer, Default: 128, Range: 1 - ∞
Maximum number of tokens to generate. A word is generally 2-3 tokens.
min_new_tokens Type: integer, Default: -1, Range: -1 - ∞
Minimum number of tokens to generate. Set to -1 to disable. A word is generally 2-3 tokens.
stop_sequences Type: string
A comma-separated list of sequences at which to stop generation. For example, '<end>,<stop>' will stop generation at the first instance of '<end>' or '<stop>'.
replicate_weights Type: string
Path to fine-tuned weights produced by a Replicate fine-tune job.
repetition_penalty Type: number, Default: 1.15, Range: 0 - ∞
Controls how repetitive the generated text can be: lower values allow more repetition, higher values penalize it. Set to 1.0 to disable.
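
The parameter list above maps one-to-one onto the keys of the input object. As an illustration only, the hypothetical helper below assembles an input dict with the documented defaults and rejects values outside the documented ranges; the function name and validation logic are assumptions for this sketch, not part of the model.

from typing import Optional

def build_paraphrase_input(
    prompt: str,
    temperature: float = 0.7,
    top_p: float = 0.95,
    top_k: int = 50,
    max_new_tokens: int = 128,
    min_new_tokens: int = -1,
    repetition_penalty: float = 1.15,
    stop_sequences: Optional[str] = None,
) -> dict:
    """Hypothetical helper: bundle the documented parameters into an input dict."""
    if not 0.01 <= temperature <= 5:
        raise ValueError("temperature must be within 0.01 - 5")
    if not 0 <= top_p <= 1:
        raise ValueError("top_p must be within 0 - 1")
    if max_new_tokens < 1:
        raise ValueError("max_new_tokens must be at least 1")
    if min_new_tokens < -1:
        raise ValueError("min_new_tokens must be -1 (disabled) or higher")
    inputs = {
        "prompt": prompt,
        "temperature": temperature,
        "top_p": top_p,
        "top_k": top_k,
        "max_new_tokens": max_new_tokens,
        "min_new_tokens": min_new_tokens,
        "repetition_penalty": repetition_penalty,
    }
    if stop_sequences is not None:
        # e.g. "<end>,<stop>" -- comma-separated, as documented above
        inputs["stop_sequences"] = stop_sequences
    return inputs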
Output Schema

Output

Type: array, Items Type: string
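
Because the output is an array of string chunks rather than a single string, the client reassembles (or streams) the text itself. A minimal sketch, again assuming the Replicate Python client; depending on the client version the result may be a plain list or a lazy iterator, and the loop below works for either.

import replicate

# Iterate over the output chunks and print them as they arrive.
for chunk in replicate.run(
    "nateraw/llama-2-7b-paraphrase-v1:17b76fbd699fcd4476b6d7292de8bfd1ee1b219f7ed81f0395da0631d00850cc",
    input={"prompt": "The capital of France is Paris\n"},
):
    print(chunk, end="", flush=True)
print()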

Example Execution Logs
Your formatted prompt is:
The capital of France is Paris
previous weights were different, switching to https://replicate.delivery/pbxt/TVlCb4hSte1nUqEMOrIjrcnitL1WlK3l5vOzb99Voz2fOb4RA/training_output.zip
Downloading peft weights
using https://replicate.delivery/pbxt/TVlCb4hSte1nUqEMOrIjrcnitL1WlK3l5vOzb99Voz2fOb4RA/training_output.zip instead of https://replicate.delivery/pbxt/TVlCb4hSte1nUqEMOrIjrcnitL1WlK3l5vOzb99Voz2fOb4RA/training_output.zip
Downloaded training_output.zip as 10 824 kB chunks in 0.3440 with 0 retries
Downloaded peft weights in 0.344
Unzipped peft weights in 0.010
Initialized peft model in 0.006
Overall initialize_peft took 9.265
Exllama: False
INFO 11-14 23:28:47 async_llm_engine.py:371] Received request 0: prompt: 'The capital of France is Paris\n', sampling params: SamplingParams(n=1, best_of=1, presence_penalty=0.0, frequency_penalty=1.0, temperature=0.75, top_p=0.9, top_k=50, use_beam_search=False, length_penalty=1.0, early_stopping=False, stop=['</s>'], ignore_eos=False, max_tokens=128, logprobs=None, skip_special_tokens=True), prompt token ids: None.
INFO 11-14 23:28:47 llm_engine.py:631] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 0.0 tokens/s, Running: 1 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 0.0%, CPU KV cache usage: 0.0%
INFO 11-14 23:28:48 async_llm_engine.py:111] Finished request 0.
hostname: model-hs-73001d65-da72d39bf79629ac-gpu-a40-85ddd9ccc9-97bzw
Version Details
Version ID
17b76fbd699fcd4476b6d7292de8bfd1ee1b219f7ed81f0395da0631d00850cc
Version Created
November 14, 2023