🤖 Model lucataco/prompt-guard-86m Detect prompt injection and jailbreak attempts in text inputs. Accepts a text string and outputs multi-label classificat... prompt-injection-detection • jailbreak-detection • llm-safety • 31 runs