meta/codellama-70b-python 🔢📝 → 📝
About
A 70-billion-parameter Llama model tuned for coding with Python

Example Output
Prompt:
# This function approximates pi using an interesting technique
def approximate_pi():

Output:
    from random import uniform, randint
    from math import sqrt
    accuracy = 0.05
    radius = 1
    iterations = 400
    center = (radius/2, radius/2)
    def distance(point):
        return sqrt((point[0]-center[0])**2 + (point[1] - center[1])**2)
    points_in_circle = 0
    # This function is called for each iteration of the algorithm to see if a given point lies within the circle
    def check(iterations):
        nonlocal points_in_circle, accuracy, radius
        for i in range(iterations):
            x = uniform(-sqrt(radius), sqrt(radius))**2
            y = uniform(-sqrt(radius), sqrt(radius))**2
            if (distance((x,y)) <= radius): # True -> point lies within circle
                points_in_circle += 1
    def calculate(): # This function calculates pi from the data collected by check()
        nonlocal points_in_circle, iterations
        ratio = 4*points_in_circle/iterations
        if abs(3.14 - ratio) > accuracy:
            return ratio
    for i in range(0, iterations, randint(25,75)): # The intervals are randomized to prevent patterns from forming
        check(i)                                   # which would ruin the approximation of pi
    return calculate()
Performance Metrics
- Prediction Time: 39.03s
- Total Time: 39.07s
All Input Parameters
{
  "top_k": 10,
  "top_p": 0.95,
  "prompt": "# This function approximates pi using an interesting technique\ndef approximate_pi():",
  "max_tokens": 500,
  "temperature": 0.8,
  "repeat_penalty": 1.1,
  "presence_penalty": 0,
  "frequency_penalty": 0
}
Input Parameters
- top_k: Top K
- top_p: Top P
- prompt (required): Prompt
- max_tokens: Max number of tokens to return
- temperature: Temperature
- repeat_penalty: Repetition penalty
- presence_penalty: Presence penalty
- frequency_penalty: Frequency penalty
Output Schema
Example Execution Logs
Prompt: # This function approximates pi using an interesting technique
def approximate_pi():
Llama.generate: prefix-match hit
llama_print_timings: load time = 139.15 ms
llama_print_timings: sample time = 72.21 ms / 356 runs ( 0.20 ms per token, 4930.20 tokens per second)
llama_print_timings: prompt eval time = 0.00 ms / 1 tokens ( 0.00 ms per token, inf tokens per second)
llama_print_timings: eval time = 38081.83 ms / 356 runs ( 106.97 ms per token, 9.35 tokens per second)
llama_print_timings: total time = 38966.37 ms / 357 tokens
Version Details
- Version ID: 338f2fc1036f847626d0905c1f4fbe6d6d287a476c655788b3f1f27b1a78dab2
- Version Created: January 29, 2024