Build content moderation applications using the Azure AI Content Safety SDK for Java.
| Category | Description |
| --- | --- |
| Hate | Discriminatory language based on identity groups |
| Sexual | Sexual content, relationships, acts |
| Violence | Physical harm, weapons, injury |
| Self-harm | Self-injury, suicide-related content |
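As a sketch of how these four categories surface when analyzing text with the SDK (assuming the `com.azure:azure-ai-contentsafety` package; the endpoint and key environment variable names here are placeholders for your own Azure resource settings):

```java
import com.azure.ai.contentsafety.ContentSafetyClient;
import com.azure.ai.contentsafety.ContentSafetyClientBuilder;
import com.azure.ai.contentsafety.models.AnalyzeTextOptions;
import com.azure.ai.contentsafety.models.AnalyzeTextResult;
import com.azure.ai.contentsafety.models.TextCategoriesAnalysis;
import com.azure.core.credential.KeyCredential;

public class AnalyzeTextSample {
    public static void main(String[] args) {
        // Endpoint and key come from your Azure AI Content Safety resource.
        ContentSafetyClient client = new ContentSafetyClientBuilder()
                .endpoint(System.getenv("CONTENT_SAFETY_ENDPOINT"))
                .credential(new KeyCredential(System.getenv("CONTENT_SAFETY_KEY")))
                .buildClient();

        AnalyzeTextResult result = client.analyzeText(
                new AnalyzeTextOptions("Sample text to analyze"));

        // Each entry corresponds to one of the four harm categories above,
        // with a severity score (0 = safe; higher values = more severe).
        for (TextCategoriesAnalysis analysis : result.getCategoriesAnalysis()) {
            System.out.printf("%s: severity %d%n",
                    analysis.getCategory(), analysis.getSeverity());
        }
    }
}
```

The call requires a provisioned Content Safety resource, so the snippet reads credentials from the environment rather than hard-coding them.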
Use this skill when implementing text/image analysis, blocklist management, or harm detection for hate, violence, sexual content, and self-harm. Source: sickn33/antigravity-awesome-skills.
1. Open your terminal or command-line tool (Terminal, iTerm, Windows Terminal, etc.).
2. Copy and run this command:

```bash
npx skills add https://github.com/sickn33/antigravity-awesome-skills --skill azure-ai-contentsafety-java
```

3. Once installed, the skill is automatically configured in your AI coding environment and ready to use in Claude Code, Cursor, or OpenClaw.