tomasmcm/evolved-seeker-1.3b 📝🔢 → 📝

▶️ 30 runs 📅 Nov 2023 ⚙️ Cog 0.8.6 📄 Paper ⚖️ License
chatml-support code-generation coding-assistant low-resource text-to-code

About

Source: TokenBender/evolvedSeeker_1_3 ✦ Quant: TheBloke/evolvedSeeker_1_3-AWQ ✦ A fine-tuned version of deepseek-ai/deepseek-coder-1.3b-base, trained for 3 epochs on 50k instructions.

Example Output

Prompt:

"<|im_start|>system
You are a helpful coding assistant.<|im_end|>
<|im_start|>user
write a javascript program to reverse letters in each word in a sentence without reversing order of words in the sentence.<|im_end|>
<|im_start|>assistant"

Output

Here is a simple JavaScript program that achieves this:

function reverseWordsInSentence(sentence) {
    return sentence.split(' ').map(word => word.split('').reverse().join('')).join(' ');
}

console.log(reverseWordsInSentence("Hello world")); // Outputs: "olleH dlrow"

This function works by first splitting the sentence into an array of words. Then, for each word, it splits the word into an array of characters, reverses that array, then joins it back together into a word. Finally, it joins all the words back together into a sentence.
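Note that the prompt above follows the ChatML template (system, user, and an open assistant turn). As a minimal sketch, a helper like the following could assemble such a prompt from a list of messages; the toChatML name and message shape are illustrative, not part of the model's API:

// Illustrative helper: wrap each message in ChatML tags and leave
// the assistant turn open so the model completes it.
function toChatML(messages) {
  const turns = messages
    .map(({ role, content }) => `<|im_start|>${role}\n${content}<|im_end|>`)
    .join("\n");
  return `${turns}\n<|im_start|>assistant`;
}

const prompt = toChatML([
  { role: "system", content: "You are a helpful coding assistant." },
  { role: "user", content: "write a javascript program to reverse letters in each word in a sentence without reversing order of words in the sentence." },
]);

Applied to the messages above, this produces exactly the prompt string shown in the example.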

Performance Metrics

4.89s Prediction Time
4.91s Total Time
All Input Parameters
{
  "stop": "<|im_end|>",
  "top_k": 40,
  "top_p": 0.95,
  "prompt": "<|im_start|>system\nYou are a helpful coding assistant.<|im_end|>\n<|im_start|>user\nwrite a javascript program to reverse letters in each word in a sentence without reversing order of words in the sentence.<|im_end|>\n<|im_start|>assistant",
  "max_tokens": 512,
  "temperature": 0.7,
  "presence_penalty": 0,
  "frequency_penalty": 0
}
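To reproduce this prediction programmatically, one option is the official replicate Node.js client. A minimal sketch, assuming a REPLICATE_API_TOKEN environment variable and the version ID listed under Version Details below:

import Replicate from "replicate";

// Authenticates with the token from the environment.
const replicate = new Replicate({ auth: process.env.REPLICATE_API_TOKEN });

// The ChatML prompt from the example above.
const prompt = [
  "<|im_start|>system",
  "You are a helpful coding assistant.<|im_end|>",
  "<|im_start|>user",
  "write a javascript program to reverse letters in each word in a sentence without reversing order of words in the sentence.<|im_end|>",
  "<|im_start|>assistant",
].join("\n");

const output = await replicate.run(
  "tomasmcm/evolved-seeker-1.3b:04492f3607a921344a8173a727224dc1ea22518ea9d40b8231296afe29fb972a",
  {
    input: {
      prompt,
      stop: "<|im_end|>",
      top_k: 40,
      top_p: 0.95,
      max_tokens: 512,
      temperature: 0.7,
    },
  }
);

console.log(output);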
Input Parameters
stop · Type: string
List of strings that stop the generation when they are generated. The returned output will not contain the stop strings.
top_k · Type: integer · Default: -1
Integer that controls the number of top tokens to consider. Set to -1 to consider all tokens.
top_p · Type: number · Default: 0.95 · Range: 0.01 to 1
Float that controls the cumulative probability of the top tokens to consider. Must be in (0, 1]. Set to 1 to consider all tokens.
prompt (required) · Type: string
Text prompt to send to the model.
max_tokens · Type: integer · Default: 128
Maximum number of tokens to generate per output sequence.
temperature · Type: number · Default: 0.8 · Range: 0.01 to 5
Float that controls the randomness of the sampling. Lower values make the model more deterministic; higher values make it more random. Zero means greedy sampling.
presence_penalty · Type: number · Default: 0 · Range: -5 to 5
Float that penalizes new tokens based on whether they appear in the generated text so far. Values > 0 encourage the model to use new tokens; values < 0 encourage it to repeat tokens.
frequency_penalty · Type: number · Default: 0 · Range: -5 to 5
Float that penalizes new tokens based on their frequency in the generated text so far. Values > 0 encourage the model to use new tokens; values < 0 encourage it to repeat tokens.
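For intuition on how temperature, top_k, and top_p interact, here is a conceptual sketch of one sampling step over a token distribution. This is illustrative only; the actual filtering happens server-side in the model runtime, not in client code:

// Conceptual sketch: temperature scaling, then top-k, then top-p (nucleus)
// filtering, then a random draw from the surviving tokens.
function sampleToken(logits, { temperature = 0.8, topK = -1, topP = 0.95 } = {}) {
  // Temperature divides the logits before softmax; lower values sharpen
  // the distribution (temperature 0 would mean greedy argmax instead).
  const scaled = logits.map((l) => l / temperature);
  const maxL = Math.max(...scaled);
  const exps = scaled.map((l) => Math.exp(l - maxL));
  const sum = exps.reduce((a, b) => a + b, 0);
  let probs = exps.map((e, i) => ({ token: i, p: e / sum }));

  // Top-k: keep only the k most probable tokens (-1 keeps all).
  probs.sort((a, b) => b.p - a.p);
  if (topK > 0) probs = probs.slice(0, topK);

  // Top-p: keep the smallest prefix whose cumulative probability >= topP.
  let cum = 0;
  const nucleus = [];
  for (const entry of probs) {
    nucleus.push(entry);
    cum += entry.p;
    if (cum >= topP) break;
  }

  // Renormalize and draw from the filtered distribution.
  const total = nucleus.reduce((a, e) => a + e.p, 0);
  let r = Math.random() * total;
  for (const entry of nucleus) {
    r -= entry.p;
    if (r <= 0) return entry.token;
  }
  return nucleus[nucleus.length - 1].token;
}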
Output Schema

Output

Type: string

Example Execution Logs
Processed prompts:   0%|          | 0/1 [00:00<?, ?it/s]
Processed prompts: 100%|██████████| 1/1 [00:04<00:00,  4.88s/it]
Generated 158 tokens in 4.881830453872681 seconds.
Version Details
Version ID
04492f3607a921344a8173a727224dc1ea22518ea9d40b8231296afe29fb972a
Version Created
November 28, 2023
Run on Replicate →