snowflake/snowflake-arctic-instruct 📝🖼️ → 📝

⭐ Official ▶️ 2.0M runs 📅 Apr 2024 ⚙️ Cog v0.10.0-alpha3+dev 🔗 GitHub ⚖️ License
code-generation text-generation

About

An efficient, intelligent, and truly open-source language model

Example Output

Prompt:

"Write fizz buzz in SQL"

Output

Fizz Buzz is a common programming problem that involves printing numbers from 1 to 100, replacing multiples of 3 with "Fizz", multiples of 5 with "Buzz", and multiples of both 3 and 5 with "FizzBuzz". Here's how you can solve this problem in SQL:

WITH RECURSIVE fizz_buzz AS (
    SELECT 1 AS num
    UNION ALL
    SELECT num + 1 FROM fizz_buzz WHERE num < 100
)
SELECT
    CASE
        WHEN MOD(num, 3) = 0 AND MOD(num, 5) = 0 THEN 'FizzBuzz'
        WHEN MOD(num, 3) = 0 THEN 'Fizz'
        WHEN MOD(num, 5) = 0 THEN 'Buzz'
        ELSE CAST(num AS CHAR)
    END AS result
FROM fizz_buzz;
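The recursive CTE above generates the numbers 1 through 100 and applies the CASE logic row by row. The same logic can be sanity-checked with a short Python sketch (illustrative only, not part of the model's output):

```python
# Mirror the SQL CASE expression for numbers 1..100.
def fizz_buzz(n: int) -> str:
    if n % 3 == 0 and n % 5 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)

results = [fizz_buzz(n) for n in range(1, 101)]
# Spot-check rows the SQL query would return for num = 3, 5, 15.
print(results[2], results[4], results[14])  # → Fizz Buzz FizzBuzz
```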

Performance Metrics

3.00s Prediction Time
2.70s Total Time
All Input Parameters
{
  "top_p": 0.9,
  "prompt": "Write fizz buzz in SQL",
  "temperature": 0.2,
  "max_new_tokens": 512,
  "min_new_tokens": 0,
  "stop_sequences": "<|im_end|>",
  "prompt_template": "<|im_start|>system\nYou're a helpful assistant<|im_end|>\n<|im_start|>user\n{prompt}<|im_end|>\n\n<|im_start|>assistant\n",
  "presence_penalty": 1.15,
  "frequency_penalty": 0.2
}
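The `prompt_template` parameter wraps the user prompt in the model's chat markup before inference; the formatted result matches the prompt string visible in the execution logs below. A minimal sketch of that substitution:

```python
# Reconstruct the final prompt from the template in the parameters above.
prompt_template = (
    "<|im_start|>system\nYou're a helpful assistant<|im_end|>\n"
    "<|im_start|>user\n{prompt}<|im_end|>\n\n"
    "<|im_start|>assistant\n"
)
final_prompt = prompt_template.format(prompt="Write fizz buzz in SQL")
print(final_prompt)
```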
Input Parameters
name (string)
name_file (string)
Output Schema

Output

Type: array (items: string)
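Because the output is an array of string chunks (typically streamed tokens), callers generally concatenate the pieces into one response. A minimal sketch with hypothetical chunks:

```python
# Hypothetical streamed chunks matching the array-of-strings schema.
chunks = ["Fizz Buzz is a common ", "programming ", "problem"]
text = "".join(chunks)
print(text)  # → Fizz Buzz is a common programming problem
```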

Example Execution Logs
INFO 04-24 03:58:30 async_llm_engine.py:508] Received request 709f2034294045d7aacd6694b23a02aa: prompt: "<|im_start|>system\nYou're a helpful assistant<|im_end|>\n<|im_start|>user\nWrite fizz buzz in SQL<|im_end|>\n\n<|im_start|>assistant\n", sampling_params: SamplingParams(n=1, best_of=1, presence_penalty=1.15, frequency_penalty=0.2, repetition_penalty=1.0, temperature=0.6, top_p=0.9, top_k=50, min_p=0.0, seed=None, use_beam_search=False, length_penalty=1.0, early_stopping=False, stop=['<|im_end|>', '<|im_end|>'], stop_token_ids=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=512, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None), prompt_token_ids: None, lora_request: None.
INFO 04-24 03:58:33 async_llm_engine.py:120] Finished request 709f2034294045d7aacd6694b23a02aa.
Tokens/second: 79.51895574949201
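The logged throughput, combined with the 3.00s prediction time from the metrics above, gives a back-of-the-envelope estimate of how many tokens were generated (assuming generation spanned roughly the full prediction time):

```python
tokens_per_second = 79.51895574949201  # from the execution logs
prediction_time_s = 3.00               # from Performance Metrics
approx_tokens = tokens_per_second * prediction_time_s
print(round(approx_tokens))  # → 239
```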
Version Details
Version ID
081f548e9a59c93b8355abe28ca52680c8305bc8f4a186a3de62ea41b25db8dd
Version Created
April 24, 2024
Run on Replicate →