meta/codellama-34b
About
A 34-billion-parameter Code Llama model tuned for coding and conversation

Example Output
Prompt:
"# function to sum 2 integers."
Output:
# function to sum 2 integers (a and b are the two integer parameters).
def sum(a, b):
    return a + b  # return the sum of the two integers

# function to subtract one integer from another.
def sub(a, b):
    return a - b  # return the difference between the two integers

# function to multiply 2 integers.
def mul(a, b):
    return a * b  # return the product of the two integers

# function to divide one integer by another.
def div(a, b):
    return
Performance Metrics
- Prediction Time: 8.59s
- Total Time: 8.60s
All Input Parameters
{
  "debug": false,
  "top_k": 50,
  "top_p": 0.9,
  "prompt": "# function to sum 2 integers.",
  "temperature": 0.75,
  "max_new_tokens": 128,
  "min_new_tokens": -1
}
Input Parameters
- debug: Provide debugging output in logs.
- top_k: When decoding text, samples from the top k most likely tokens; lower to ignore less likely tokens.
- top_p: When decoding text, samples from the top p percentage of most likely tokens; lower to ignore less likely tokens.
- prompt (required): Prompt to send to the model.
- temperature: Adjusts the randomness of outputs; values greater than 1 are more random, 0 is deterministic. 0.75 is a good starting value.
- max_new_tokens: Maximum number of tokens to generate. A word is generally 2-3 tokens.
- min_new_tokens: Minimum number of tokens to generate. To disable, set to -1. A word is generally 2-3 tokens.
- stop_sequences: A comma-separated list of sequences at which to stop generation. For example, '<end>,<stop>' will stop generation at the first instance of '<end>' or '<stop>'.
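The temperature, top_k, and top_p parameters all reshape the model's next-token distribution before a token is sampled. As a minimal illustrative sketch (not the model's actual decoding code), temperature scaling and top-k/top-p (nucleus) filtering over a toy distribution could look like this:

```python
import numpy as np

def apply_temperature(logits, temperature=0.75):
    """Softmax with temperature: lower values sharpen the distribution,
    higher values flatten it. (temperature=0 corresponds to greedy
    argmax and is handled as a special case in real decoders.)"""
    z = np.asarray(logits, dtype=float) / temperature
    z -= z.max()                       # numerical stability
    p = np.exp(z)
    return p / p.sum()

def filter_top_k_top_p(probs, top_k=50, top_p=0.9):
    """Keep only the top-k most likely tokens, then the smallest prefix
    of those whose cumulative probability reaches top_p; renormalize."""
    probs = np.asarray(probs, dtype=float)
    order = np.argsort(probs)[::-1]    # token ids, most likely first
    keep = order[:top_k]               # top-k cut
    cumulative = np.cumsum(probs[keep])
    cutoff = np.searchsorted(cumulative, top_p) + 1   # nucleus cut
    keep = keep[:cutoff]
    filtered = np.zeros_like(probs)
    filtered[keep] = probs[keep]
    return filtered / filtered.sum()   # renormalized distribution

# Toy 4-token vocabulary: the filter keeps tokens 0-2 and renormalizes.
probs = filter_top_k_top_p([0.4, 0.3, 0.2, 0.1], top_k=3, top_p=0.75)
# roughly [0.44, 0.33, 0.22, 0.0]; a token is then sampled from this.
```

Lowering top_k or top_p shrinks the candidate set (more conservative completions), while raising temperature spreads probability mass over more tokens before the cut is applied.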
Output Schema
Example Execution Logs
Your formatted prompt is: # function to sum 2 integers.
Version Details
- Version ID: 0666717e5ead8557dff55ee8f11924b5c0309f5f1ca52f64bb8eec405fdb38a7
- Version Created: August 26, 2023