meta/codellama-7b
About
A 7-billion-parameter Llama model tuned for coding and conversation.

Example Output
Prompt:

# function to sum 2 numbers
def sum(

Output:

a, b):
    return a + b

# function to subtract 2 numbers
def subtract(a, b):
    return a - b

# function to multiply 2 numbers
def multiply(a, b):
    return a *b; # semicolon is used to terminate the line and execute the next line. if we forget semicolon then it will
Performance Metrics
- Prediction Time: 1.89s
- Total Time: 1.84s
All Input Parameters
{ "debug": false, "top_k": 250, "top_p": 0.95, "prompt": "# function to sum 2 numbers\ndef sum(", "temperature": 0.6, "max_new_tokens": 80 }
Input Parameters
- debug: Provide debugging output in logs.
- top_k: When decoding text, samples from the top k most likely tokens; lower it to ignore less likely tokens (see the sampling sketch after this list).
- top_p: When decoding text, samples from the smallest set of most likely tokens whose cumulative probability reaches p; lower it to ignore less likely tokens (see the sampling sketch after this list).
- prompt (required): Prompt to send to CodeLlama.
- temperature: Adjusts the randomness of outputs; values greater than 1 are more random, 0 is deterministic, and 0.75 is a good starting value.
- max_new_tokens: Maximum number of tokens to generate. A word is generally 2-3 tokens.
- stop_sequences: A comma-separated list of sequences at which to stop generation. For example, '<end>,<stop>' will stop generation at the first instance of '<end>' or '<stop>'.
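To make the sampling knobs concrete, here is an illustrative sketch of temperature, top_k, and top_p applied to a toy distribution, along with stop-sequence truncation. This is not CodeLlama's or vLLM's actual implementation; it is a generic top-k/nucleus sampler written to show what lowering each parameter does:

import numpy as np

def sample_token(logits, temperature=0.6, top_k=250, top_p=0.95, rng=None):
    # Temperature rescales logits: values below 1 sharpen the
    # distribution, values above 1 flatten it toward uniform.
    rng = rng or np.random.default_rng()
    probs = np.exp((logits - logits.max()) / max(temperature, 1e-8))
    probs /= probs.sum()
    # top_k: keep only the k most likely tokens.
    order = np.argsort(probs)[::-1][:top_k]
    # top_p (nucleus): of those, keep the smallest prefix whose
    # cumulative probability reaches p.
    cum = np.cumsum(probs[order])
    keep = order[: int(np.searchsorted(cum, top_p)) + 1]
    return int(rng.choice(keep, p=probs[keep] / probs[keep].sum()))

def truncate_at_stops(text, stop_sequences="<end>,<stop>"):
    # stop_sequences is a comma-separated string, per the parameter
    # description above; cut the text at the earliest match found.
    for seq in stop_sequences.split(","):
        idx = text.find(seq)
        if idx != -1:
            text = text[:idx]
    return text

logits = np.array([2.0, 1.5, 0.3, -1.0, -2.0])  # toy 5-token vocabulary
print(sample_token(logits))
print(truncate_at_stops("def sum(a, b):<end>extra"))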
Output Schema
Example Execution Logs
Prompt:
# function to sum 2 numbers
def sum(

INFO 09-06 18:52:31 async_llm_engine.py:117] Received request 0: prompt: '# function to sum 2 numbers\ndef sum(', sampling params: SamplingParams(n=1, best_of=1, presence_penalty=0.0, frequency_penalty=1.0, temperature=0.6, top_p=0.95, top_k=250, use_beam_search=False, stop=['</s>'], ignore_eos=False, max_tokens=80, logprobs=None), prompt token ids: None.
INFO 09-06 18:52:31 llm_engine.py:394] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 0.0 tokens/s, Running: 1 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 0.0%, CPU KV cache usage: 0.0%
INFO 09-06 18:52:33 async_llm_engine.py:171] Finished request 0.
Version Details
- Version ID: 3ac6e93b700c3254c07a3b5a36b6255e01955d927c793bc957a5bb216fceb4e1
- Version Created: September 6, 2023