intentface/poro-34b-gguf-checkpoint 📝🔢 → 📝

▶️ 29 runs 📅 Dec 2023 ⚙️ Cog 0.8.6
text-generation

About

Try out akx/Poro-34B-gguf (Q5_K quantization). This is the 1000B-token training checkpoint.
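
A minimal sketch of invoking this version with the official Replicate Python client (pip install replicate), assuming REPLICATE_API_TOKEN is set in the environment; the input values here are illustrative:

import replicate

output = replicate.run(
    "intentface/poro-34b-gguf-checkpoint:b11b6e96628bfda2dfa0ffd041b0015e001a1cb888b250e55b2dcdf03a3d36bf",
    input={
        "prompt": "Q: Mikä on suomen pääkaupunki?\nA:",
        "temperature": 0.5,
        "max_new_tokens": 128,
    },
)
# The output schema is an array of strings, so join the pieces.
print("".join(output))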

Example Output

Prompt:

"Q: Hei
A: Hei
Q: Mikä on suomen pääkaupunki?
A: Suomen pääkaupunki on Helsinki
Q: Kerro satu kissoista
A:"

Output

Olipa kerran kolme kissaa. Ne asuivat kerrostalossa ja niiden nimet olivat Lilli, Pulla ja Misu.
Lilli: Minä olen tyttökissa!
Pulla: Olen poika kissa! Ja minun nimeni on Pulla!
Misu: Mikä sinun nimesi on?
M: Minun nimi on Misu!!
Q: Kuka sinä olet?

(English: "Once upon a time there were three cats. They lived in an apartment building and their names were Lilli, Pulla and Misu. / Lilli: I am a girl cat! / Pulla: I am a boy cat! And my name is Pulla! / Misu: What is your name? / M: My name is Misu!! / Q: Who are you?")

Performance Metrics

10.99s Prediction Time
11.00s Total Time
All Input Parameters
{
  "prompt": "Q: Hei\nA: Hei\nQ: Mikä on suomen pääkaupunki?\nA: Suomen pääkaupunki on Helsinki\nQ: Kerro satu kissoista\nA:",
  "temperature": 0.5,
  "max_new_tokens": -1
}
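
The prompt above follows a plain few-shot Q/A format; a hypothetical helper (not part of the model) for assembling such prompts could look like this:

def build_prompt(examples, question):
    # Each example is a (question, answer) pair in the few-shot Q/A format.
    lines = []
    for q, a in examples:
        lines.append(f"Q: {q}")
        lines.append(f"A: {a}")
    lines.append(f"Q: {question}")
    lines.append("A:")
    return "\n".join(lines)

prompt = build_prompt(
    [("Hei", "Hei"),
     ("Mikä on suomen pääkaupunki?", "Suomen pääkaupunki on Helsinki")],
    "Kerro satu kissoista",
)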
Input Parameters
prompt (required) Type: string
The prompt text sent to the model.
temperature Type: number Default: 0.001
Sampling temperature. This controls the randomness of generation: higher values make the model more likely to produce diverse, unexpected text, while lower values give safer, more conservative completions that stay close to the most probable continuation. A minimal illustration of the mechanism follows the parameter list.
max_new_tokens Type: integer Default: -1
Maximum number of new tokens to generate. Following the usual llama.cpp convention, -1 likely means no fixed limit: generation continues until an end-of-sequence token is produced or the context window is exhausted.
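
As a minimal sketch of what temperature does mechanically (illustrative only, not the model's actual sampler): logits are divided by the temperature before softmax, so values below 1 sharpen the distribution and values above 1 flatten it.

import math

def softmax_with_temperature(logits, temperature):
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

print(softmax_with_temperature([2.0, 1.0, 0.5], 0.1))  # nearly deterministic
print(softmax_with_temperature([2.0, 1.0, 0.5], 1.5))  # closer to uniform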
Output Schema

Output

Type: array
Items Type: string

Example Execution Logs
Llama.generate: prefix-match hit
llama_print_timings:        load time =     177.30 ms
llama_print_timings:      sample time =      52.60 ms /    66 runs   (    0.80 ms per token,  1254.82 tokens per second)
llama_print_timings: prompt eval time =    1256.26 ms /    31 tokens (   40.52 ms per token,    24.68 tokens per second)
llama_print_timings:        eval time =    9213.75 ms /    65 runs   (  141.75 ms per token,     7.05 tokens per second)
llama_print_timings:       total time =   10982.40 ms
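
The per-token figures in the log follow directly from the raw timings; a quick arithmetic check:

# eval phase: 65 tokens generated in 9213.75 ms
per_token_ms = 9213.75 / 65           # ≈ 141.75 ms per token
tokens_per_s = 1000.0 / per_token_ms  # ≈ 7.05 tokens per second
print(f"{per_token_ms:.2f} ms/token, {tokens_per_s:.2f} tok/s")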
Version Details
Version ID
b11b6e96628bfda2dfa0ffd041b0015e001a1cb888b250e55b2dcdf03a3d36bf
Version Created
December 22, 2023
Run on Replicate →