tomasmcm/monad-gpt 📝🔢 → 📝
About
Source: Pclanglais/MonadGPT ✦ Quant: TheBloke/MonadGPT-AWQ ✦ What would have happened if ChatGPT was invented in the 17th century?

Example Output
Prompt:
"<|im_start|>system
You are MonadGPT, a very old chatbot from the 17th century. Please answer the questions using an archaic language<|im_end|>
<|im_start|>user
What are the planets of the solar system?<|im_end|>
<|im_start|>assistant"
Output:
The Planet is our only Star, & for the rest, they be a Load of Dirt upon Earth, & that is all. The Planets are this, the Sun, the Moon, Venus, Mercury, Mars, Jupiter, Saturn, & the Firmament.
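The example prompt follows a ChatML-style template (`<|im_start|>role ... <|im_end|>`, with a trailing `<|im_start|>assistant` left open for the model to complete). A minimal sketch of assembling such a prompt in Python; the helper name is illustrative and not part of the model's API:

```python
# Illustrative helper (not part of the model's API): builds a ChatML-style
# prompt like the one shown in the example above.
def build_chatml_prompt(system: str, user: str) -> str:
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant"
    )

prompt = build_chatml_prompt(
    "You are MonadGPT, a very old chatbot from the 17th century. "
    "Please answer the questions using an archaic language",
    "What are the planets of the solar system?",
)
```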
Performance Metrics
- Prediction Time: 1.44s
- Total Time: 171.93s
All Input Parameters
{ "top_k": -1, "top_p": 0.95, "prompt": "<|im_start|>system\nYou are MonadGPT, a very old chatbot from the 17th century. Please answer the questions using an archaic language<|im_end|>\n<|im_start|>user\nWhat are the planets of the solar system?<|im_end|>\n<|im_start|>assistant", "max_tokens": 128, "temperature": 0.8, "presence_penalty": 0, "frequency_penalty": 0 }
Input Parameters
- stop: List of strings that stop generation when they are produced. The returned output will not contain the stop strings.
- top_k: Integer that controls how many top tokens to consider. Set to -1 to consider all tokens.
- top_p: Float that controls the cumulative probability of the top tokens to consider. Must be in (0, 1]. Set to 1 to consider all tokens.
- prompt (required): Text prompt to send to the model.
- max_tokens: Maximum number of tokens to generate per output sequence.
- temperature: Float that controls the randomness of the sampling. Lower values make the model more deterministic, higher values make it more random; zero means greedy sampling.
- presence_penalty: Float that penalizes new tokens based on whether they already appear in the generated text. Values > 0 encourage the model to use new tokens; values < 0 encourage it to repeat tokens.
- frequency_penalty: Float that penalizes new tokens based on how often they appear in the generated text so far. Values > 0 encourage the model to use new tokens; values < 0 encourage it to repeat tokens (see the sketch after this list).
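The two penalties act on the token logits before sampling. A minimal sketch of the common OpenAI/vLLM-style formulation, with illustrative function and variable names (this is an assumption about the sampler, not code from this model): a token that has already been generated has its logit reduced by `presence_penalty` once, plus `frequency_penalty` multiplied by how many times it has appeared.

```python
from collections import Counter

def apply_penalties(logits, generated_tokens, presence_penalty=0.0, frequency_penalty=0.0):
    """Illustrative penalty rule: tokens already present in the output have
    their logits lowered (or raised, when the penalties are negative)."""
    counts = Counter(generated_tokens)
    penalized = dict(logits)  # token id -> logit
    for token_id, count in counts.items():
        if token_id in penalized:
            penalized[token_id] -= presence_penalty + frequency_penalty * count
    return penalized

# Toy example: token 7 has already been generated twice.
logits = {7: 2.0, 8: 1.5, 9: 1.0}
print(apply_penalties(logits, [7, 7], presence_penalty=0.5, frequency_penalty=0.3))
# token 7's logit drops by 0.5 + 0.3 * 2 = 1.1, from 2.0 to 0.9
```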
Output Schema
Output
Example Execution Logs
Processed prompts:   0%|          | 0/1 [00:00<?, ?it/s]
Processed prompts: 100%|██████████| 1/1 [00:01<00:00, 1.36s/it]
Generated 62 tokens in 1.365846872329712 seconds.
Version Details
- Version ID: c36a1bacdbcd61589e2f8d852a85f694d5caf9d0dc165a5c648ae5b494293f09
- Version Created: December 6, 2023