tomasmcm/claude2-alpaca-13b 📝🔢 → 📝
About
Source: umd-zhou-lab/claude2-alpaca-13B ✦ Quant: TheBloke/claude2-alpaca-13B-AWQ ✦ This model was trained by fine-tuning Llama 2 on the Claude 2 Alpaca dataset.
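For reference, one way to call the model is through the Replicate API. The sketch below is an assumption-laden example, not part of this page: it assumes the official `replicate` Python client (`pip install replicate`) and a `REPLICATE_API_TOKEN` in the environment, and uses the version ID listed under Version Details.

```python
# Minimal sketch, assuming the `replicate` Python client and an API token
# exported as REPLICATE_API_TOKEN. The version ID comes from Version Details.
import replicate

output = replicate.run(
    "tomasmcm/claude2-alpaca-13b:49b2d4cc082625d10463f56426bd012ebd75c11e3d6c2b54f7532c6ddd46b944",
    input={
        "prompt": (
            "Below is an instruction that describes a task. "
            "Write a response that appropriately completes the request.\n\n"
            "### Instruction:\nGenerate a haiku about AI.\n\n### Response:\n"
        ),
    },
)
print(output)
```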
Example Output
Prompt:
"
Below is an instruction that describes a task. Write a response that appropriately completes the request.
Instruction:
Generate a haiku about AI.
Response:
"Output
Artificial intelligence
Is a powerful technology
That can help us.
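The prompt follows the standard Alpaca template, with the instruction and response slots delimited by `### Instruction:` and `### Response:` markers. A small helper for building it (the function name is illustrative, not part of the model's API):

```python
# Builds the Alpaca-style prompt shown above. The template text is taken
# verbatim from the example run; `alpaca_prompt` is a hypothetical helper name.
def alpaca_prompt(instruction: str) -> str:
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n### Response:\n"
    )

print(alpaca_prompt("Generate a haiku about AI."))
```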
Performance Metrics
- Prediction Time: 0.46s
- Total Time: 0.49s
All Input Parameters
{
  "stop": "###",
  "top_k": -1,
  "top_p": 0.95,
  "prompt": "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\nGenerate a haiku about AI.\n\n### Response:\n",
  "max_tokens": 128,
  "temperature": 0.8,
  "presence_penalty": 0,
  "frequency_penalty": 0
}
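The same inputs as a Python dict, suitable for passing as the `input` argument of a client call like the `replicate.run` sketch above. Note that `stop` is `"###"`, so generation halts as soon as the model starts a new `### ...` Alpaca section.

```python
# The example run's exact inputs. Only the dict contents come from this page;
# how you submit them (e.g. via the `replicate` client) is up to the caller.
inputs = {
    "stop": "###",       # halt at the next Alpaca section marker
    "top_k": -1,         # -1 disables top-k filtering
    "top_p": 0.95,
    "prompt": (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        "### Instruction:\nGenerate a haiku about AI.\n\n### Response:\n"
    ),
    "max_tokens": 128,
    "temperature": 0.8,
    "presence_penalty": 0,
    "frequency_penalty": 0,
}
```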
Input Parameters
- stop: List of strings that stop the generation when they are generated. The returned output will not contain the stop strings.
- top_k: Integer that controls the number of top tokens to consider. Set to -1 to consider all tokens.
- top_p: Float that controls the cumulative probability of the top tokens to consider. Must be in (0, 1]. Set to 1 to consider all tokens (a sampling sketch follows this list).
- prompt (required): Text prompt to send to the model.
- max_tokens: Maximum number of tokens to generate per output sequence.
- temperature: Float that controls the randomness of the sampling. Lower values make the model more deterministic, while higher values make the model more random. Zero means greedy sampling.
- presence_penalty: Float that penalizes new tokens based on whether they appear in the generated text so far. Values > 0 encourage the model to use new tokens, while values < 0 encourage the model to repeat tokens.
- frequency_penalty: Float that penalizes new tokens based on their frequency in the generated text so far. Values > 0 encourage the model to use new tokens, while values < 0 encourage the model to repeat tokens.
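To make the sampling parameters concrete, here is a minimal NumPy sketch of how `temperature`, `top_k`, and `top_p` combine before a token is drawn. It mirrors standard top-k / nucleus sampling as commonly implemented; it is an illustration of the technique, not the serving code this model actually runs.

```python
# Illustrative top-k / nucleus sampling over one logit vector; an assumption
# about the technique the parameters describe, not the model server's code.
import numpy as np

def sample_token(logits, temperature, top_k, top_p, rng):
    if temperature == 0:                        # zero temperature = greedy
        return int(np.argmax(logits))
    scaled = logits / temperature
    probs = np.exp(scaled - np.max(scaled))     # stable softmax
    probs /= probs.sum()
    order = np.argsort(probs)[::-1]             # most likely tokens first
    keep = np.ones_like(probs, dtype=bool)
    if top_k > 0:                               # top_k = -1 keeps all tokens
        keep[order[top_k:]] = False
    cum = np.cumsum(probs[order])
    cutoff = np.searchsorted(cum, top_p) + 1    # smallest set covering top_p mass
    keep[order[cutoff:]] = False
    probs = np.where(keep, probs, 0.0)
    probs /= probs.sum()
    return int(rng.choice(len(probs), p=probs))

rng = np.random.default_rng(0)
token_id = sample_token(rng.normal(size=8), temperature=0.8, top_k=-1, top_p=0.95, rng=rng)
```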
Output Schema
Example Execution Logs
Processed prompts:   0%|          | 0/1 [00:00<?, ?it/s]
Processed prompts: 100%|██████████| 1/1 [00:00<00:00, 2.23it/s]
Generated 16 tokens in 0.44997215270996094 seconds.
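(16 tokens in 0.45 seconds works out to roughly 36 tokens per second for this run.)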
Version Details
- Version ID: 49b2d4cc082625d10463f56426bd012ebd75c11e3d6c2b54f7532c6ddd46b944
- Version Created: December 10, 2023