meta/codellama-13b-python

3.9K runs · Aug 2023 · Cog 0.8.6 · GitHub · Paper · License
code-generation python text-generation

About

A 13 billion parameter Llama model tuned for coding in Python

Example Output

Prompt:

"# function that adds 2 number inputs."

Output

def add(input1: int, input2: int) -> int:
    return input1 + input2

if __name__ == "__main__":
    print("enter two values")

Performance Metrics

2.48s Prediction Time
2.46s Total Time
All Input Parameters
{
  "debug": false,
  "top_k": 250,
  "top_p": 0.95,
  "prompt": "# function that adds 2 number inputs.",
  "temperature": 0.95,
  "max_new_tokens": 50,
  "min_new_tokens": -1,
  "repetition_penalty": 1.15,
  "repetition_penalty_sustain": 256,
  "token_repetition_penalty_decay": 128
}
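As a rough illustration, the parameters above could be passed to the model through Replicate's Python client. The sketch below is an assumption-laden example, not official usage documentation: it assumes the `replicate` package is installed and a `REPLICATE_API_TOKEN` environment variable is set, and the input dict simply mirrors the "All Input Parameters" JSON shown above.

```python
# Input payload mirroring the "All Input Parameters" example above.
inputs = {
    "debug": False,
    "top_k": 250,
    "top_p": 0.95,
    "prompt": "# function that adds 2 number inputs.",
    "temperature": 0.95,
    "max_new_tokens": 50,
    "min_new_tokens": -1,
    "repetition_penalty": 1.15,
    "repetition_penalty_sustain": 256,
    "token_repetition_penalty_decay": 128,
}

# Sketch of the actual call (assumes network access and an API token):
# import replicate
# output = replicate.run(
#     "meta/codellama-13b-python:f7f3a7e9876784f44c970ce0fc0d3aa792ac1570752b9f3b610d6e8ce0bf3220",
#     input=inputs,
# )
# print("".join(output))  # the output schema is an array of strings
```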
Input Parameters
debug | Type: boolean | Default: false
Provide debugging output in logs.
top_k | Type: integer | Default: 50 | Range: 0 - ∞
When decoding text, samples from the top k most likely tokens; lower to ignore less likely tokens.
top_p | Type: number | Default: 0.9 | Range: 0 - 1
When decoding text, samples from the top p percentage of most likely tokens; lower to ignore less likely tokens.
prompt (required) | Type: string
Prompt to send to CodeLlama.
temperature | Type: number | Default: 0.75 | Range: 0.01 - 5
Adjusts randomness of outputs: greater than 1 is more random, 0 is deterministic; 0.75 is a good starting value.
max_new_tokens | Type: integer | Default: 128 | Range: 1 - ∞
Maximum number of tokens to generate. A word is generally 2-3 tokens.
stop_sequences | Type: string
A comma-separated list of sequences at which to stop generation. For example, '<end>,<stop>' will stop generation at the first instance of '<end>' or '<stop>'.
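To build intuition for how top_k and top_p interact, here is a small illustrative sketch. This is not the model's actual sampling code, only a toy version of the two-stage filtering these parameters conventionally describe: keep the k highest-scoring tokens, then keep the smallest prefix whose cumulative probability reaches p.

```python
import math

def top_k_top_p_filter(logits, top_k=50, top_p=0.9):
    """Toy sketch of top-k / top-p (nucleus) candidate filtering."""
    # Sort token indices by logit, highest first.
    order = sorted(range(len(logits)), key=lambda i: logits[i], reverse=True)
    if top_k > 0:
        order = order[:top_k]  # stage 1: keep the k most likely tokens
    # Softmax over the surviving logits.
    mx = max(logits[i] for i in order)
    exps = [math.exp(logits[i] - mx) for i in order]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Stage 2: keep the smallest prefix whose cumulative probability >= top_p.
    kept, cum = [], 0.0
    for idx, p in zip(order, probs):
        kept.append(idx)
        cum += p
        if cum >= top_p:
            break
    return kept

# With a sharply peaked distribution, a low top_p keeps only the top token.
print(top_k_top_p_filter([5.0, 1.0, 0.5, 0.1], top_k=4, top_p=0.5))  # → [0]
```

Lowering either parameter narrows the candidate pool, which makes output more conservative; raising temperature (applied to the logits before this filtering) flattens the distribution and has the opposite effect.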
Output Schema

Output

Type: array | Items Type: string

Example Execution Logs
Prompt:
# function that adds 2 number inputs.
** Speed: 45.03 tokens/second
Version Details
Version ID
f7f3a7e9876784f44c970ce0fc0d3aa792ac1570752b9f3b610d6e8ce0bf3220
Version Created
September 6, 2023