🤖 Model lucataco/prompt-guard-86m Detect prompt injection and jailbreak attempts in text inputs. Classify a string as benign, injection, and/or jailbreak... text-classification • prompt-injection-detection • llm-safety • 31 runs