tomasmcm/neuronovo-7b-v0.3 📝🔢 → 📝
About
Source: Neuronovo/neuronovo-7B-v0.3 ✦ Quant: TheBloke/neuronovo-7B-v0.3-AWQ ✦ Neuronovo/neuronovo-7B-v0.3 is a fine-tuned large language model based on CultriX/MistralTrix-v1.

Example Output
Prompt:
"<|im_start|>system
You are a helpful assistant.<|im_end|>
<|im_start|>user
Give me a tasty recipe for some noodle dish.<|im_end|>
<|im_start|>assistant"
Output:
Here is a delicious and easy-to-make recipe for Chicken Pad Thai, which is a popular Thai noodle dish. It's perfect for lunch or dinner and serves around 4 people.
Ingredients:
- 8 oz (225g) rice noodles
- 2 tablespoons vegetable oil
- 1 large egg, beaten
- 1 lb (450g) boneless chicken breasts or thighs, thinly sliced
- 2 tablespoons fish sauce
- 2 tablespoons soy sauce
- 1 tablespoon rice vinegar
- 1 tablespoon sugar (or to taste)
- 1 tablespoon sriracha sauce (or to taste)
- 1 bell pepper (any color), sliced into thin strips
- 1 small carrot, julienned or cut into thin strips
- 1 cup (100g) bean sprouts
- 3 scallions, cut into 2-inch (5cm) pieces
- 1/4 cup (20g) chopped roasted peanuts
- 1/4 cup (1
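The prompt in the example above uses ChatML formatting (`<|im_start|>` / `<|im_end|>` turn markers, ending with an open assistant turn). A minimal sketch of assembling such a prompt; the `wrap_chatml` helper name is my own, not part of the model's tooling:

```python
def wrap_chatml(messages):
    """Join (role, content) pairs into a ChatML prompt, ending with an
    open assistant turn for the model to complete."""
    parts = [f"<|im_start|>{role}\n{content}<|im_end|>" for role, content in messages]
    parts.append("<|im_start|>assistant")
    return "\n".join(parts)

prompt = wrap_chatml([
    ("system", "You are a helpful assistant."),
    ("user", "Give me a tasty recipe for some noodle dish."),
])
print(prompt)
```

Note that the recorded API call below appends a trailing newline after `<|im_start|>assistant`.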
Performance Metrics
5.93s
Prediction Time
5.97s
Total Time
All Input Parameters
{
  "top_k": -1,
  "top_p": 0.95,
  "prompt": "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n<|im_start|>user\nGive me a tasty recipe for some noodle dish.<|im_end|>\n<|im_start|>assistant\n",
  "max_tokens": 256,
  "temperature": 0.8,
  "presence_penalty": 0,
  "frequency_penalty": 0
}
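The same parameters expressed as a Python dict, as you might pass to a client library's `input` argument (for example the Replicate client's `replicate.run(...)`; the client call itself is not shown here):

```python
# The recorded input parameters for the example run above, as a payload dict.
payload = {
    "prompt": (
        "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
        "<|im_start|>user\nGive me a tasty recipe for some noodle dish.<|im_end|>\n"
        "<|im_start|>assistant\n"
    ),
    "max_tokens": 256,       # generation stopped here, truncating the recipe
    "temperature": 0.8,
    "top_p": 0.95,
    "top_k": -1,             # -1 disables top-k filtering
    "presence_penalty": 0,
    "frequency_penalty": 0,
}
```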
Input Parameters
- stop
- List of strings that stop generation when any of them is produced. The returned output will not contain the stop strings.
- top_k
- Integer that controls the number of top tokens to consider. Set to -1 to consider all tokens.
- top_p
- Float that controls the cumulative probability of the top tokens to consider. Must be in (0, 1]. Set to 1 to consider all tokens.
- prompt (required)
- Text prompt to send to the model.
- max_tokens
- Maximum number of tokens to generate per output sequence.
- temperature
- Float that controls the randomness of the sampling. Lower values make the model more deterministic, while higher values make the model more random. Zero means greedy sampling.
- presence_penalty
- Float that penalizes new tokens based on whether they appear in the generated text so far. Values > 0 encourage the model to use new tokens, while values < 0 encourage the model to repeat tokens.
- frequency_penalty
- Float that penalizes new tokens based on their frequency in the generated text so far. Values > 0 encourage the model to use new tokens, while values < 0 encourage the model to repeat tokens.
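To make the top_k / top_p descriptions above concrete, here is an illustrative reimplementation of the filtering step over a toy token distribution; this is a sketch of the technique, not the model's actual sampling code:

```python
def filter_tokens(probs, top_k=-1, top_p=1.0):
    """Return the token ids kept after top-k then top-p (nucleus) filtering.
    `probs` maps token id -> probability; top_k=-1 keeps all tokens."""
    items = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    if top_k > 0:
        items = items[:top_k]          # keep only the k most likely tokens
    kept, cum = [], 0.0
    for tok, p in items:
        kept.append(tok)
        cum += p
        if cum >= top_p:               # stop once cumulative mass reaches top_p
            break
    return kept

dist = {"a": 0.5, "b": 0.3, "c": 0.15, "d": 0.05}
print(filter_tokens(dist, top_k=-1, top_p=0.95))  # → ['a', 'b', 'c']
```

With top_p=0.95 the rare tail token "d" is dropped, while top_p=1 (or top_k=-1) keeps the full distribution, matching the parameter descriptions above.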
Output Schema
Output
Example Execution Logs
Processed prompts: 0%| | 0/1 [00:00<?, ?it/s]
Processed prompts: 100%|██████████| 1/1 [00:05<00:00, 5.87s/it]
Generated 256 tokens in 5.86732029914856 seconds.
Version Details
- Version ID
2447da54222e6a4bb51657960962f643dd24f55e11004d4dd908e1efab027cd3
- Version Created
- January 12, 2024