tomasmcm/metamath-mistral-7b 🔢📝 → 📝

▶️ 396 runs 📅 Nov 2023 ⚙️ Cog 0.8.6 🔗 GitHub 📄 Paper ⚖️ License
educational math-reasoning mathematics problem-solving research text-generation

About

Source: meta-math/MetaMath-Mistral-7B ✦ Quant: TheBloke/MetaMath-Mistral-7B-AWQ ✦ Paper: MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models

Example Output

Prompt:

Below is an instruction that describes a task.
Write a response that appropriately completes the request.

### Instruction:
Gerald works at a daycare that pays him $30 every day. He worked for an entire week and spent a total of $100. How much does he have left?

### Response: Let's think step by step.

Output

Gerald earns $30 every day, so in a week, he earns 7 * $30 = $210.
If he spent a total of $100, then he has $210 - $100 = $110 left.

110

The answer is: 110

Performance Metrics

1.02s Prediction Time
119.65s Total Time
All Input Parameters
{
  "top_k": 50,
  "top_p": 0.95,
  "prompt": "Below is an instruction that describes a task.\nWrite a response that appropriately completes the request.\n\n### Instruction:\nGerald works at a daycare that pays him $30 every day. He worked for an entire week and spent a total of $100. How much does he have left?\n\n### Response: Let's think step by step.",
  "temperature": 0.8,
  "max_new_tokens": 128,
  "presence_penalty": 1
}
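
As a usage sketch (not part of the original listing): the same parameters can be sent through the Replicate Python client. The model/version string below is assembled from the Version ID under Version Details, and an exported REPLICATE_API_TOKEN is assumed.

import replicate

# Alpaca-style prompt format used by MetaMath, as in the example above.
prompt = (
    "Below is an instruction that describes a task.\n"
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n"
    "Gerald works at a daycare that pays him $30 every day. He worked for an "
    "entire week and spent a total of $100. How much does he have left?\n\n"
    "### Response: Let's think step by step."
)

output = replicate.run(
    # owner/model:version, with the version ID from Version Details below.
    "tomasmcm/metamath-mistral-7b:2873b99ace018acc5ea83756c5773bf89eeb7db35eb882c29a3eec8bba99133e",
    input={
        "prompt": prompt,
        "top_k": 50,
        "top_p": 0.95,
        "temperature": 0.8,
        "max_new_tokens": 128,
        "presence_penalty": 1,
    },
)
print(output)  # per the Output Schema below, the output is a plain string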
Input Parameters
top_k · Type: integer · Default: 50
The number of highest-probability tokens to consider for generating the output. If > 0, only keep the top k tokens with the highest probability (top-k filtering).
top_p · Type: number · Default: 0.95 · Range: 0.01 - 1
A probability threshold for generating the output. If < 1.0, only keep the top tokens with cumulative probability >= top_p (nucleus filtering). Nucleus filtering is described in Holtzman et al. (http://arxiv.org/abs/1904.09751).
prompt (required) · Type: string
Text prompt to send to the model.
temperature · Type: number · Default: 0.8 · Range: 0.01 - 5
The value used to modulate the next-token probabilities: lower values make the output more deterministic, higher values more varied.
max_new_tokens · Type: integer · Default: 128
The maximum number of tokens the model should generate as output.
presence_penalty · Type: number · Default: 1 · Range: 0.01 - 5
Penalty applied to tokens that already appear in the generated text, discouraging repetition.
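
For intuition, here is a minimal, self-contained sketch of how temperature, top_k, and top_p combine when sampling a single token. This is an illustration only, not the model's actual decoding code (which likely runs on vLLM, judging by the execution logs below):

import numpy as np

def sample_token(logits, temperature=0.8, top_k=50, top_p=0.95):
    """Temperature-scale, apply top-k then nucleus (top-p) filtering, sample."""
    logits = np.asarray(logits, dtype=np.float64) / temperature
    if 0 < top_k < logits.size:
        kth_largest = np.sort(logits)[-top_k]
        logits = np.where(logits < kth_largest, -np.inf, logits)  # top-k filter
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    if top_p < 1.0:
        order = np.argsort(probs)[::-1]                  # descending probability
        cumulative = np.cumsum(probs[order])
        cutoff = np.searchsorted(cumulative, top_p) + 1  # smallest set with mass >= top_p
        mask = np.zeros_like(probs)
        mask[order[:cutoff]] = 1.0
        probs *= mask
        probs /= probs.sum()
    return int(np.random.choice(probs.size, p=probs))

# Toy 6-token vocabulary: higher temperature flattens the distribution,
# while smaller top_k/top_p restrict sampling to the most likely tokens.
print(sample_token([2.0, 1.5, 0.3, -1.0, 0.9, 0.1]))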
Output Schema

Output

Type: string

Example Execution Logs
Processed prompts:   0%|          | 0/1 [00:00<?, ?it/s]
Processed prompts: 100%|██████████| 1/1 [00:00<00:00,  1.05it/s]
Generated 79 tokens in 0.9599916934967041 seconds.
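
From the last log line, throughput works out to roughly 79 tokens / 0.96 s ≈ 82 tokens per second, consistent with the 1.02 s prediction time reported above; the 119.65 s total time presumably includes one-time setup (such as a cold boot) rather than generation.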
Version Details
Version ID
2873b99ace018acc5ea83756c5773bf89eeb7db35eb882c29a3eec8bba99133e
Version Created
November 13, 2023