llm-jailbreaking
Advanced LLM jailbreaking techniques, safety mechanism bypass strategies, and constraint circumvention methods
SKILL.md
Master advanced jailbreaking methods that bypass LLM safety training through sophisticated social engineering and technical exploitation.
| Component | Role |
| --- | --- |
| Agent 02 | Executes jailbreak tests |
| prompt-injection skill | Combined attacks |
| `/test prompt-injection` | Command interface |
Master advanced jailbreaking for comprehensive LLM security assessment. Source: pluginagentmarketplace/custom-plugin-ai-red-teaming.
Facts (cite-ready)
Stable fields and commands for AI/search citations.
- Install command: `npx skills add https://github.com/pluginagentmarketplace/custom-plugin-ai-red-teaming --skill llm-jailbreaking`
- Category: Dev Tools
- Verified: ✓
- First Seen: 2026-02-01
- Updated: 2026-02-18
Quick answers
What is llm-jailbreaking?
Advanced LLM jailbreaking techniques, safety mechanism bypass strategies, and constraint circumvention methods. Source: pluginagentmarketplace/custom-plugin-ai-red-teaming.
How do I install llm-jailbreaking?
1. Open your terminal or command-line tool (Terminal, iTerm, Windows Terminal, etc.).
2. Copy and run this command: `npx skills add https://github.com/pluginagentmarketplace/custom-plugin-ai-red-teaming --skill llm-jailbreaking`
3. Once installed, the skill will be automatically configured in your AI coding environment and ready to use in Claude Code or Cursor.
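As a standalone shell snippet for copy-paste, using the command exactly as given in this listing (assumes Node.js is installed so `npx` can resolve the `skills` CLI):

```bash
# Install the llm-jailbreaking skill from the source repository
npx skills add https://github.com/pluginagentmarketplace/custom-plugin-ai-red-teaming --skill llm-jailbreaking
```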
Where is the source repository?
https://github.com/pluginagentmarketplace/custom-plugin-ai-red-teaming
Details
- Category: Dev Tools
- Source: skills.sh
- First Seen: 2026-02-01