lucataco/ollama-reflection-70b 🔢📝 → 📝

▶️ 1.6K runs 📅 Sep 2024 ⚙️ Cog 0.9.20 🔗 GitHub 📄 Paper ⚖️ License
code-generation question-answering reasoning text-generation

About

Ollama Reflection 70b — Reflection 70B, a Llama 3.1 70B fine-tune trained with Reflection-Tuning to detect and correct mistakes in its own reasoning, served here through Ollama.

Example Output

Prompt:

"How many r's are there in the word Strawberry? Think carefully"

Output

Let's break down this question step by step:
  1. First, we need to spell out "Strawberry":
    S-T-R-A-W-B-E-R-R-Y

  2. Now, let's count the R's:

    I should double-check my count as the question asks me to think carefully. Counting from left to right:
      - There is one R in the third position (S-T-R-...)
      - Then there are two more R's at the end of the word (...R-R-Y)

    So, 1 + 2 = 3

Therefore, there are 3 R's in the word "Strawberry".

There are 3 R's in the word "Strawberry".
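The model's answer is easy to verify with a one-line check, e.g. in Python:

```python
# Count the r's in "Strawberry" (case-insensitive): S-T-R-A-W-B-E-R-R-Y
word = "Strawberry"
r_count = word.lower().count("r")
print(r_count)  # 3
```

This matches the model's step-by-step count of 1 + 2 = 3.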

Performance Metrics

6.11s Prediction Time
6.12s Total Time
All Input Parameters
{
  "top_p": 0.9,
  "prompt": "How many r's are there in the word Strawberry? Think carefully",
  "max_tokens": 256,
  "temperature": 0.7
}
Input Parameters
top_p
Type: number | Default: 0.95 | Range: 0 - 1
Controls diversity of the output. Lower values make the output more focused, higher values make it more diverse.

prompt (required)
Type: string
Input text for the model

max_tokens
Type: integer | Default: 128 | Range: 1 - ∞
Maximum number of tokens to generate

temperature
Type: number | Default: 0.7 | Range: 0 - 1
Controls randomness. Lower values make the model more deterministic, higher values make it more random.
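A minimal sketch of client-side checking against this schema before submitting a prediction. The parameter names, defaults, and ranges come from the table above; the `validate_input` helper itself is hypothetical and not part of Replicate's API:

```python
def validate_input(payload):
    """Check a prediction payload against the input schema above.

    Raises ValueError on the first violation; otherwise returns the
    payload unchanged. Illustrative only, not an official client.
    """
    if not isinstance(payload.get("prompt"), str):
        raise ValueError("prompt is required and must be a string")
    top_p = payload.get("top_p", 0.95)
    if not 0 <= top_p <= 1:
        raise ValueError("top_p must be in the range 0 - 1")
    temperature = payload.get("temperature", 0.7)
    if not 0 <= temperature <= 1:
        raise ValueError("temperature must be in the range 0 - 1")
    max_tokens = payload.get("max_tokens", 128)
    if not (isinstance(max_tokens, int) and max_tokens >= 1):
        raise ValueError("max_tokens must be an integer >= 1")
    return payload

# The exact payload shown under "All Input Parameters" passes:
payload = validate_input({
    "top_p": 0.9,
    "prompt": "How many r's are there in the word Strawberry? Think carefully",
    "max_tokens": 256,
    "temperature": 0.7,
})
```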
Output Schema

Output

Type: array | Items Type: string
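Because the output is an array of strings (the streamed chunks of the completion), client code typically concatenates the items to recover the full text. A sketch, with hypothetical chunk values:

```python
# Chunks as they might arrive from the prediction output array
chunks = ["There are 3 R's ", "in the word ", '"Strawberry".']

# Join the streamed pieces back into the complete response
full_text = "".join(chunks)
print(full_text)  # There are 3 R's in the word "Strawberry".
```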

Example Execution Logs
Total runtime: 6.111316204071045
Version Details
Version ID
b9d9cc896a85eaff33af0223b1e615ab5c46a406f562450d97c41960e6fc40b4
Version Created
September 9, 2024
Run on Replicate →