meta/codellama-13b-python
About
A 13-billion-parameter Llama model tuned for coding with Python.

Example Output
Prompt:
# function that adds 2 number inputs.

Output:
def add(self, input1: int, input2: int) -> float:
    return (input1 + input2)

if __name__ == "__main__":
    print("enter two values ")
Performance Metrics
- Prediction Time: 2.48s
- Total Time: 2.46s
All Input Parameters
{ "debug": false, "top_k": 250, "top_p": 0.95, "prompt": "# function that adds 2 number inputs.", "temperature": 0.95, "max_new_tokens": 50, "min_new_tokens": -1, "repetition_penalty": 1.15, "repetition_penalty_sustain": 256, "token_repetition_penalty_decay": 128 }
Input Parameters
- debug: Provide debugging output in logs.
- top_k: When decoding text, samples from the top k most likely tokens; lower this to ignore less likely tokens.
- top_p: When decoding text, samples from the top p percentage of most likely tokens; lower this to ignore less likely tokens.
- prompt (required): Prompt to send to CodeLlama.
- temperature: Adjusts the randomness of outputs; values greater than 1 are more random, 0 is deterministic, and 0.75 is a good starting value. (See the sampling sketch after this list for how temperature, top_k, and top_p interact.)
- max_new_tokens: Maximum number of tokens to generate. A word is generally 2-3 tokens.
- stop_sequences: A comma-separated list of sequences at which to stop generation. For example, '<end>,<stop>' will stop generation at the first instance of '<end>' or '<stop>'.
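To make the sampling parameters concrete, the sketch below shows one common way temperature scaling, top-k filtering, and top-p (nucleus) filtering are combined when picking the next token. It is an illustrative NumPy implementation, not CodeLlama's actual decoding code, and the function name sample_next_token is made up for this example.

import numpy as np

def sample_next_token(logits, temperature=0.95, top_k=250, top_p=0.95, rng=None):
    # Illustrative sketch of how the three sampling controls interact on a
    # single next-token distribution; not the model's real decoder.
    rng = rng or np.random.default_rng()

    # Temperature scaling: values below 1 sharpen the distribution, values
    # above 1 flatten it; near 0 the most likely token dominates.
    logits = np.asarray(logits, dtype=np.float64) / max(temperature, 1e-8)
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()

    # Rank token ids from most to least likely, then apply top-k:
    # only the k most likely tokens remain candidates.
    order = np.argsort(probs)[::-1][:top_k]

    # Top-p (nucleus): keep the smallest prefix of those candidates whose
    # cumulative probability reaches the fraction p (always at least one token).
    cumulative = np.cumsum(probs[order])
    cut = int(np.searchsorted(cumulative, top_p * cumulative[-1])) + 1
    order = order[:cut]

    # Renormalize over the surviving candidates and sample one token id.
    candidate_probs = probs[order] / probs[order].sum()
    return int(rng.choice(order, p=candidate_probs))

With the defaults above (temperature 0.95, top_k 250, top_p 0.95), sampling stays fairly diverse; lowering temperature toward 0 or shrinking top_k and top_p pushes the output toward deterministic, high-probability completions.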
Output Schema
Example Execution Logs
Prompt: # function that adds 2 number inputs.
Speed: 45.03 tokens/second
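Assuming the speed figure covers the full 50-token generation budget (max_new_tokens above), decoding alone accounts for roughly 50 / 45.03 ≈ 1.1 s of the 2.48 s prediction time, with the remainder spent on prompt processing and other overhead.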
Version Details
- Version ID: f7f3a7e9876784f44c970ce0fc0d3aa792ac1570752b9f3b610d6e8ce0bf3220
- Version Created: September 6, 2023