fofr/llama2-prompter
About
Llama 2 13B base model fine-tuned on text-to-image prompts.

Example Output
Prompt:
"[PROMPT] a spooky ghost, in the style of"
Output:
[PROMPT] a spooky ghost, in the style of realistic illustrations, fantasy art, cg, shiny, cobwebs, 32k uhd, dark gray and amber
Performance Metrics
- Prediction Time: 3.81s
- Total Time: 3.76s
All Input Parameters
{
  "debug": false,
  "top_k": 50,
  "top_p": 0.9,
  "prompt": "[PROMPT] a spooky ghost, in the style of",
  "temperature": 0.75,
  "max_new_tokens": 128,
  "min_new_tokens": 10,
  "stop_sequences": "[/PROMPT]"
}
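As a minimal sketch of how these parameters could be assembled and sent to the model — assuming the `replicate` Python client and a valid `REPLICATE_API_TOKEN`; the `format_prompt` helper is hypothetical, not part of the model:

```python
def format_prompt(subject: str) -> str:
    """Wrap a subject in the [PROMPT] template this model was tuned on (hypothetical helper)."""
    return f"[PROMPT] {subject}, in the style of"

# Input payload mirroring the "All Input Parameters" example above.
model_input = {
    "prompt": format_prompt("a spooky ghost"),
    "temperature": 0.75,
    "top_k": 50,
    "top_p": 0.9,
    "max_new_tokens": 128,
    "min_new_tokens": 10,
    "stop_sequences": "[/PROMPT]",
    "debug": False,
}

# With the replicate client installed, this would run the model (not executed here):
# import replicate
# output = replicate.run("fofr/llama2-prompter:<version-id>", input=model_input)

print(model_input["prompt"])
```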
Input Parameters
- seed: Random seed. Leave blank to randomize the seed.
- debug: Provide debugging output in logs.
- top_k: When decoding text, samples from the top k most likely tokens; lower to ignore less likely tokens.
- top_p: When decoding text, samples from the top p percentage of most likely tokens; lower to ignore less likely tokens.
- prompt (required): Prompt to send to the model.
- temperature: Adjusts randomness of outputs; greater than 1 is random and 0 is deterministic. 0.75 is a good starting value.
- max_new_tokens: Maximum number of tokens to generate. A word is generally 2-3 tokens.
- min_new_tokens: Minimum number of tokens to generate. To disable, set to -1. A word is generally 2-3 tokens.
- stop_sequences: A comma-separated list of sequences to stop generation at. For example, '<end>,<stop>' will stop generation at the first instance of '<end>' or '<stop>'.
- replicate_weights: Path to fine-tuned weights produced by a Replicate fine-tune job.
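To illustrate how a comma-separated `stop_sequences` value behaves as described above, here is a small sketch — the helper functions are hypothetical and only model the documented behavior, not the server's actual implementation:

```python
def split_stop_sequences(stop_sequences: str) -> list[str]:
    """Split the comma-separated stop_sequences string into individual stops."""
    return [s for s in stop_sequences.split(",") if s]

def truncate_at_stop(text: str, stop_sequences: str) -> str:
    """Cut generated text at the earliest occurring stop sequence, if any."""
    for stop in split_stop_sequences(stop_sequences):
        idx = text.find(stop)
        if idx != -1:
            # Keep only the text before this stop; later stops are
            # re-checked against the shortened text, so the earliest wins.
            text = text[:idx]
    return text

print(truncate_at_stop("a spooky ghost, cobwebs [/PROMPT] extra", "[/PROMPT]"))
```

With `stop_sequences` set to `'<end>,<stop>'`, generation halts at whichever of the two appears first in the output.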
Example Execution Logs
Your formatted prompt is: [PROMPT] a spooky ghost, in the style of
Version Details
- Version ID: 818f7a50069e243f1fa6a7881e1710aea3aa3f0132f3f37e6dab69a4b59fcef0
- Version Created: September 1, 2023