tomasmcm/docsgpt-7b-mistral 📝🔢 → 📝

▶️ 78 runs 📅 Dec 2023 ⚙️ Cog 0.8.6 📄 Paper ⚖️ License
contextual-answers developer-tools documentation question-answering rag technical-support text-generation

About

Source: Arc53/docsgpt-7b-mistral ✦ Quant: TheBloke/docsgpt-7B-mistral-AWQ ✦ DocsGPT is optimized for documentation retrieval-augmented generation (RAG), fine-tuned to give answers grounded in the supplied context.

Example Output

Prompt:

"

Instruction

When was Aquaman and the Lost Kingdom released?

Context

Aquaman and the Lost Kingdom premiered at a fan event at the Grove, Los Angeles on December 19, 2023, and was released in the United States on December 22, by Warner Bros. Pictures. The film received mixed reviews from critics, who deemed it inferior to its predecessor. The film has grossed $138 million worldwide.

Answer

"

Output

Aquaman and the Lost Kingdom was released on December 22, 2023, in the United States by Warner Bros. Pictures.
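The example prompt follows a simple three-section template: an instruction, the grounding context, and a trailing `### Answer` header that the model completes. A minimal sketch of assembling it in Python (the helper name is our own; only the `### Instruction` / `### Context` / `### Answer` markers come from the example):

```python
def build_prompt(instruction: str, context: str) -> str:
    """Assemble the three-section DocsGPT prompt shown in the example:
    instruction, grounding context, and a trailing Answer header."""
    return (
        f"### Instruction\n{instruction}\n"
        f"### Context\n{context}\n"
        f"### Answer"
    )

prompt = build_prompt(
    "When was Aquaman and the Lost Kingdom released?",
    "Aquaman and the Lost Kingdom premiered at a fan event at the Grove, "
    "Los Angeles on December 19, 2023, and was released in the United "
    "States on December 22, by Warner Bros. Pictures.",
)
```

The resulting string matches the `prompt` value in "All Input Parameters" below.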

Performance Metrics

0.68s Prediction Time
110.97s Total Time
All Input Parameters
{
  "top_k": -1,
  "top_p": 0.95,
  "prompt": "### Instruction\nWhen was Aquaman and the Lost Kingdom released?\n### Context\nAquaman and the Lost Kingdom premiered at a fan event at the Grove, Los Angeles on December 19, 2023, and was released in the United States on December 22, by Warner Bros. Pictures. The film received mixed reviews from critics, who deemed it inferior to its predecessor. The film has grossed $138 million worldwide.\n### Answer",
  "max_tokens": 128,
  "temperature": 0.8,
  "presence_penalty": 0,
  "frequency_penalty": 0
}
Input Parameters
stop Type: string
List of strings that stop the generation when they are generated. The returned output will not contain the stop strings.
top_k Type: integer, Default: -1
Integer that controls the number of top tokens to consider. Set to -1 to consider all tokens.
top_p Type: number, Default: 0.95, Range: 0.01 to 1
Float that controls the cumulative probability of the top tokens to consider. Must be in (0, 1]. Set to 1 to consider all tokens.
prompt (required) Type: string
Text prompt to send to the model.
max_tokens Type: integer, Default: 128
Maximum number of tokens to generate per output sequence.
temperature Type: number, Default: 0.8, Range: 0.01 to 5
Float that controls the randomness of the sampling. Lower values make the model more deterministic, while higher values make the model more random. Zero means greedy sampling.
presence_penalty Type: number, Default: 0, Range: -5 to 5
Float that penalizes new tokens based on whether they appear in the generated text so far. Values > 0 encourage the model to use new tokens, while values < 0 encourage the model to repeat tokens.
frequency_penalty Type: number, Default: 0, Range: -5 to 5
Float that penalizes new tokens based on their frequency in the generated text so far. Values > 0 encourage the model to use new tokens, while values < 0 encourage the model to repeat tokens.
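The documented ranges can be checked client-side before sending a request. A small sketch (the helper name and error handling are our own; the ranges are the ones listed above, and top_k is skipped because -1 is a valid sentinel meaning "consider all tokens"):

```python
def check_sampling_params(params: dict) -> dict:
    """Validate values against the documented ranges before a request.
    top_k has no bounded range: -1 means 'consider all tokens'."""
    ranges = {
        "top_p": (0.01, 1.0),
        "temperature": (0.01, 5.0),
        "presence_penalty": (-5.0, 5.0),
        "frequency_penalty": (-5.0, 5.0),
    }
    for name, (lo, hi) in ranges.items():
        value = params.get(name)
        if value is not None and not lo <= value <= hi:
            raise ValueError(f"{name}={value} is outside [{lo}, {hi}]")
    return params

# The documented defaults all pass validation.
defaults = {"top_k": -1, "top_p": 0.95, "max_tokens": 128,
            "temperature": 0.8, "presence_penalty": 0, "frequency_penalty": 0}
check_sampling_params(defaults)
```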
Output Schema

Output

Type: string

Example Execution Logs
Processed prompts:   0%|          | 0/1 [00:00<?, ?it/s]
Processed prompts: 100%|██████████| 1/1 [00:00<00:00,  1.49it/s]
Generated 34 tokens in 0.673701286315918 seconds.
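The log line above implies the model's decode throughput directly; both figures below are taken verbatim from that log:

```python
# Figures from the execution log above.
tokens = 34
seconds = 0.673701286315918

throughput = tokens / seconds
print(f"{throughput:.1f} tokens/s")  # ≈ 50.5 tokens/s
```

Note this covers generation only; the 110.97s total time in the metrics above also includes setup before the 0.68s prediction itself.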
Version Details
Version ID
2f13e3048eb30d468bfd30bdbdc403c9d7fc4d90ccd76b6f0674ba5cb1c70f0e
Version Created
December 30, 2023
Run on Replicate →