tomasmcm/synthia-13b-v1.2 🔢📝 → 📝

▶️ 593 runs 📅 Sep 2023 ⚙️ Cog 0.8.6 📄 Paper ⚖️ License
instruction-following llama-2 long-form-conversation orca-dataset reasoning text-generation

About

Source: migtissera/Synthia-13B-v1.2 ✦ Quant: TheBloke/Synthia-13B-v1.2-AWQ ✦ SynthIA (Synthetic Intelligent Agent) is a Llama-2-13B model trained on Orca-style datasets.
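Because the deployed weights are TheBloke's AWQ quant, the same checkpoint can also be run locally. Below is a minimal sketch using vLLM, assuming vLLM is installed on a GPU with AWQ kernel support; the prompt text is a placeholder, not part of the model card.

# Minimal sketch: running the AWQ quant locally with vLLM.
# Assumes `pip install vllm` and a GPU with AWQ support (an assumption,
# not something this model card specifies).
from vllm import LLM, SamplingParams

llm = LLM(model="TheBloke/Synthia-13B-v1.2-AWQ", quantization="awq")
params = SamplingParams(temperature=0.8, top_p=0.95, top_k=50, max_tokens=128)

prompt = "SYSTEM: Answer concisely.\nUSER: What is Low Earth Orbit?\nASSISTANT:"
result = llm.generate([prompt], params)
print(result[0].outputs[0].text)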

Example Output

Prompt:

"SYSTEM: Elaborate on the topic using a Tree of Thoughts and backtrack when necessary to construct a clear, cohesive Chain of Thought reasoning. Always answer without hesitation.
USER: How is a rocket launched from the surface of the earth to Low Earth Orbit?
ASSISTANT:"

Output

Tree of Thoughts:

  1. Rocket launch process:
    a. Ignition and lift-off
    b. Powered flight phase
    c. Transition to sub-orbital flight
    d. Earth orbit insertion
    e. Deployment of satellites or payload
  2. Key components:
    a. Fuel and oxidizer
    b. Engines and thrusters
    c. Navigation and guidance systems
    d. Structural integrity
  3. Challenges in space travel:
    a. Launch vehicle design
    b. Weather

Performance Metrics

2.38s Prediction Time
118.94s Total Time
The large gap between prediction and total time typically reflects queue and cold-boot overhead on Replicate rather than the generation itself.
All Input Parameters
{
  "top_k": 50,
  "top_p": 0.95,
  "prompt": "SYSTEM: Elaborate on the topic using a Tree of Thoughts and backtrack when necessary to construct a clear, cohesive Chain of Thought reasoning. Always answer without hesitation.\nUSER: How is a rocket launched from the surface of the earth to Low Earth Orbit?\nASSISTANT:",
  "temperature": 0.8,
  "max_new_tokens": 128,
  "presence_penalty": 1
}
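These parameters can be replayed with the Replicate Python client. A minimal sketch, assuming pip install replicate and a REPLICATE_API_TOKEN set in the environment; the version hash is the one listed under Version Details below.

import replicate

# Reproduce the prediction shown above; per the output schema, the result is a string.
output = replicate.run(
    "tomasmcm/synthia-13b-v1.2:05dc68e69258cd78118742320ad7be18aafd1c66b099586826965d072253f27e",
    input={
        "prompt": (
            "SYSTEM: Elaborate on the topic using a Tree of Thoughts and backtrack "
            "when necessary to construct a clear, cohesive Chain of Thought reasoning. "
            "Always answer without hesitation.\n"
            "USER: How is a rocket launched from the surface of the earth to Low Earth Orbit?\n"
            "ASSISTANT:"
        ),
        "top_k": 50,
        "top_p": 0.95,
        "temperature": 0.8,
        "max_new_tokens": 128,
        "presence_penalty": 1,
    },
)
print(output)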
Input Parameters
top_k · Type: integer · Default: 50
The number of highest-probability tokens to consider when generating output. If > 0, only the top k tokens with the highest probability are kept (top-k filtering).
top_p · Type: number · Default: 0.95 · Range: 0.01–1
A probability threshold for generating the output. If < 1.0, only the top tokens with cumulative probability >= top_p are kept (nucleus filtering). Nucleus filtering is described in Holtzman et al. (http://arxiv.org/abs/1904.09751).
prompt (required) · Type: string
Text prompt to send to the model.
temperature · Type: number · Default: 0.8 · Range: 0.01–5
The value used to modulate the next-token probabilities: lower values make the output more deterministic, higher values more varied.
max_new_tokens · Type: integer · Default: 128
The maximum number of tokens the model should generate as output.
presence_penalty · Type: number · Default: 1 · Range: 0.01–5
Penalty applied to tokens that already appear in the generated text; higher values discourage repetition.
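To make top_k, top_p, and temperature concrete, here is an illustrative NumPy sketch of how these filters restrict the next-token distribution before sampling. This is a generic reference implementation, not the model server's actual sampler, and filter_logits is a hypothetical helper name.

import numpy as np

def filter_logits(logits: np.ndarray, top_k: int = 50, top_p: float = 0.95,
                  temperature: float = 0.8) -> np.ndarray:
    """Return next-token probabilities after temperature, top-k, and top-p filtering."""
    logits = logits / temperature              # temperature rescales the distribution
    if top_k > 0:                              # top-k: keep the k most likely tokens
        kth_best = np.sort(logits)[-top_k]
        logits = np.where(logits < kth_best, -np.inf, logits)
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    if top_p < 1.0:                            # nucleus: smallest set with cum. prob >= top_p
        order = np.argsort(probs)[::-1]
        cum = np.cumsum(probs[order])
        # keep the first token that crosses the threshold, drop everything after it
        drop = order[1:][cum[:-1] >= top_p]
        probs[drop] = 0.0
        probs /= probs.sum()
    return probs

# Example: sample one token id from a toy 10-token vocabulary
rng = np.random.default_rng(0)
toy_logits = rng.normal(size=10)
p = filter_logits(toy_logits, top_k=5, top_p=0.9)
token_id = rng.choice(len(p), p=p)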
Output Schema

Output

Type: string

Example Execution Logs
Processed prompts:   0%|          | 0/1 [00:00<?, ?it/s]
Processed prompts: 100%|██████████| 1/1 [00:02<00:00,  2.25s/it]
Generated 128 tokens in 2.2487902641296387 seconds.
Version Details
Version ID
05dc68e69258cd78118742320ad7be18aafd1c66b099586826965d072253f27e
Version Created
October 23, 2023