nateraw/codellama-34b 🔢📝 → 📝

▶️ 21 runs 📅 Sep 2023 ⚙️ Cog 0.8.6
code-completion code-generation text-generation

Example Output

Output

    if n == 0:
        return 0
    elif n == 1:
        return 1
    else:
        return fibonacci(n - 1) + fibonacci(n - 2)

def main():
    num = int(input('Digite a quantidade de termos que deseja: '))
    print(fibonacci(num))

main()
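
The listing above is the model's continuation of the prompt "def fibonacci(n):\n" shown under All Input Parameters below. Prepending that prompt gives a complete, runnable script; the Portuguese prompt string translates to "Enter the number of terms you want:".

def fibonacci(n):
    if n == 0:
        return 0
    elif n == 1:
        return 1
    else:
        return fibonacci(n - 1) + fibonacci(n - 2)

def main():
    # The prompt string is Portuguese for "Enter the number of terms you want:"
    num = int(input('Digite a quantidade de termos que deseja: '))
    print(fibonacci(num))

main()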

Performance Metrics

14.16s Prediction Time
342.67s Total Time
All Input Parameters
{
  "top_k": 50,
  "top_p": 0.95,
  "message": "def fibonacci(n):\n",
  "temperature": 0.8,
  "max_new_tokens": 200
}
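
For reference, the same prediction can be reproduced with the Replicate Python client. This is a minimal sketch: it assumes the client is installed and authenticated via REPLICATE_API_TOKEN, and it combines the model name with the version ID listed under Version Details.

import replicate

# Prompt and sampling settings copied from the "All Input Parameters" block above.
output = replicate.run(
    "nateraw/codellama-34b:868672ce4d78e0a74c15e678a9ace4cda3a36da20725f83a3b44aaa24076f80b",
    input={
        "message": "def fibonacci(n):\n",
        "top_k": 50,
        "top_p": 0.95,
        "temperature": 0.8,
        "max_new_tokens": 200,
    },
)

# The output schema is an array of strings, so join the pieces into one completion.
print("".join(output))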
Input Parameters
top_k Type: integer, Default: 50
The number of highest probability tokens to consider for generating the output. If > 0, only keep the top k tokens with highest probability (top-k filtering).
top_p Type: number, Default: 0.9
A probability threshold for generating the output. If < 1.0, only keep the top tokens with cumulative probability >= top_p (nucleus filtering). Nucleus filtering is described in Holtzman et al. (http://arxiv.org/abs/1904.09751).
message (required) Type: string
temperature Type: number, Default: 0.2
The value used to modulate the next token probabilities.
max_new_tokens Type: integer, Default: 256
The maximum number of tokens the model should generate as output.
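
To make the sampling parameters concrete, the sketch below illustrates how top-k filtering, nucleus (top-p) filtering, and temperature are conventionally combined when choosing the next token. It is an illustrative approximation of the behavior described above, not the model's actual implementation.

import numpy as np

def sample_next_token(logits, top_k=50, top_p=0.9, temperature=0.2):
    # Temperature modulates the next-token probabilities: lower values sharpen
    # the distribution, higher values flatten it.
    z = logits / temperature
    z -= z.max()                      # for numerical stability
    probs = np.exp(z)
    probs /= probs.sum()

    # Sort tokens by probability, highest first.
    order = np.argsort(probs)[::-1]
    sorted_probs = probs[order]

    keep = np.ones_like(sorted_probs, dtype=bool)
    if top_k > 0:
        # Top-k filtering: keep only the k highest-probability tokens.
        keep[top_k:] = False

    # Nucleus (top-p) filtering: keep the smallest set of top tokens whose
    # cumulative probability reaches top_p.
    cumulative = np.cumsum(sorted_probs)
    keep &= (cumulative - sorted_probs) < top_p

    # Renormalise the surviving probabilities and sample one token id.
    filtered = np.where(keep, sorted_probs, 0.0)
    filtered /= filtered.sum()
    return np.random.choice(order, p=filtered)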
Output Schema

Output

Type: array, Items Type: string

Example Execution Logs
Setting `pad_token_id` to `eos_token_id`:2 for open-end generation.
Version Details
Version ID
868672ce4d78e0a74c15e678a9ace4cda3a36da20725f83a3b44aaa24076f80b
Version Created
September 28, 2023