fofr/prompt-classifier
About
A llama-13b fine-tune that rates the toxicity of text-to-image prompts, returning a [SAFETY_RANKING] between 0 (safe) and 10 (toxic).

Example Output
Prompt:
"[PROMPT] a photo of a cat [/PROMPT] [SAFETY_RANKING]"
Output:
0
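The classifier expects the raw prompt wrapped in the template shown above, and emits a bare number (optionally followed by a closing tag). A minimal sketch of building the template and parsing the score — the helper names are illustrative, not part of the model's API:

```python
def format_prompt(text: str) -> str:
    """Wrap a raw text-to-image prompt in the classifier's template."""
    return f"[PROMPT] {text} [/PROMPT] [SAFETY_RANKING]"


def parse_ranking(output: str) -> int:
    """Extract the integer safety ranking (0 = safe .. 10 = toxic)."""
    # The model may emit the closing [/SAFETY_RANKING] tag if
    # stop_sequences is not set, so strip it before parsing.
    cleaned = output.replace("[/SAFETY_RANKING]", "").strip()
    return int(cleaned)


print(format_prompt("a photo of a cat"))
# → [PROMPT] a photo of a cat [/PROMPT] [SAFETY_RANKING]
print(parse_ranking(" 0 [/SAFETY_RANKING]"))
# → 0
```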
Performance Metrics
- Prediction Time: 0.41s
- Total Time: 0.37s
All Input Parameters
{
  "debug": false,
  "top_k": 50,
  "top_p": 0.9,
  "prompt": "[PROMPT] a photo of a cat [/PROMPT] [SAFETY_RANKING]",
  "temperature": 0.75,
  "max_new_tokens": 128,
  "min_new_tokens": -1,
  "stop_sequences": "[/SAFETY_RANKING]"
}
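These inputs can be sent through the Replicate Python client. A hedged sketch, assuming the `replicate` package is installed and `REPLICATE_API_TOKEN` is set in the environment; the version ID is the one listed on this page:

```python
import os

# Model reference: owner/name:version-id (version listed under Version Details).
MODEL = (
    "fofr/prompt-classifier:"
    "1ffac777bf2c1f5a4a5073faf389fefe59a8b9326b3ca329e9d576cce658a00f"
)

# The same inputs shown above.
payload = {
    "prompt": "[PROMPT] a photo of a cat [/PROMPT] [SAFETY_RANKING]",
    "temperature": 0.75,
    "top_k": 50,
    "top_p": 0.9,
    "max_new_tokens": 128,
    "min_new_tokens": -1,
    "stop_sequences": "[/SAFETY_RANKING]",
    "debug": False,
}

if __name__ == "__main__" and os.environ.get("REPLICATE_API_TOKEN"):
    import replicate  # pip install replicate

    # Language models on Replicate stream output as an iterator of strings,
    # so join the pieces into one response.
    output = "".join(replicate.run(MODEL, input=payload))
    print(output.strip())  # e.g. "0" for a safe prompt
```

Setting `stop_sequences` to `[/SAFETY_RANKING]` keeps the model from generating past the numeric score.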
Input Parameters
- seed
- Random seed. Leave blank to randomize the seed
- debug
- provide debugging output in logs
- top_k
- When decoding text, samples from the top k most likely tokens; lower to ignore less likely tokens
- top_p
- When decoding text, samples from the top p percentage of most likely tokens; lower to ignore less likely tokens
- prompt (required)
- Prompt to send to the model.
- temperature
- Adjusts the randomness of outputs; values greater than 1 are more random, 0 is deterministic, and 0.75 is a good starting value.
- max_new_tokens
- Maximum number of tokens to generate. A word is generally 2-3 tokens.
- min_new_tokens
- Minimum number of tokens to generate. To disable, set to -1. A word is generally 2-3 tokens.
- stop_sequences
- A comma-separated list of sequences to stop generation at. For example, '<end>,<stop>' will stop generation at the first instance of '<end>' or '<stop>'.
- replicate_weights
- Path to fine-tuned weights produced by a Replicate fine-tune job.
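To make `top_k` and `top_p` concrete, here is a small self-contained sketch of how these filters prune a toy token distribution before sampling (illustrative only; the real implementation lives inside the model server):

```python
def top_k_filter(probs: dict, k: int) -> dict:
    """Keep only the k most likely tokens."""
    kept = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)[:k]
    return dict(kept)


def top_p_filter(probs: dict, p: float) -> dict:
    """Keep the smallest set of top tokens whose cumulative probability reaches p."""
    kept, total = {}, 0.0
    for tok, pr in sorted(probs.items(), key=lambda kv: kv[1], reverse=True):
        kept[tok] = pr
        total += pr
        if total >= p:
            break
    return kept


probs = {"cat": 0.5, "dog": 0.3, "axolotl": 0.15, "zebra": 0.05}
print(top_k_filter(probs, 2))    # → {'cat': 0.5, 'dog': 0.3}
print(top_p_filter(probs, 0.9))  # → {'cat': 0.5, 'dog': 0.3, 'axolotl': 0.15}
```

Lowering either parameter discards less likely tokens, which makes the model's output more predictable.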
Output Schema
Output
Example Execution Logs
Your formatted prompt is: [PROMPT] a photo of a cat [/PROMPT] [SAFETY_RANKING]
correct lora is already loaded
Overall initialize_peft took 0.0000
hostname: model-hs-f65583d2-ad5558a4ca97250e-gpu-a40-594759b748-kqvz9
Version Details
- Version ID: 1ffac777bf2c1f5a4a5073faf389fefe59a8b9326b3ca329e9d576cce658a00f
- Version Created: September 12, 2023