Analyze text and images for harmful content with customizable blocklists.
Important: `@azure-rest/ai-content-safety` is a REST-level client. `ContentSafetyClient` (the package's default export) is a factory function, not a class: call it directly rather than constructing it with `new`.
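A minimal sketch of creating the client by calling the factory function. The environment variable names (`CONTENT_SAFETY_ENDPOINT`, `CONTENT_SAFETY_KEY`) are placeholders for this example, not names required by the library:

```typescript
import ContentSafetyClient from "@azure-rest/ai-content-safety";
import { AzureKeyCredential } from "@azure/core-auth";

// The default export is a factory function: call it, do not use `new`.
const endpoint = process.env.CONTENT_SAFETY_ENDPOINT!;
const credential = new AzureKeyCredential(process.env.CONTENT_SAFETY_KEY!);
const client = ContentSafetyClient(endpoint, credential);
```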
| Category | API Name | Description |
| --- | --- | --- |
| Hate and Fairness | `Hate` | Discriminatory language targeting identity groups |
| Sexual | `Sexual` | Sexual content, nudity, pornography |
| Violence | `Violence` | Physical harm, weapons, terrorism |
| Self-Harm | `SelfHarm` | Self-injury, suicide, eating disorders |
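A sketch of a text analysis call that screens against all four categories above. The `"/text:analyze"` path, `outputType` option, and `categoriesAnalysis` response field follow the package's published samples; verify them against your installed package and API version:

```typescript
import ContentSafetyClient, { isUnexpected } from "@azure-rest/ai-content-safety";
import { AzureKeyCredential } from "@azure/core-auth";

const client = ContentSafetyClient(
  process.env.CONTENT_SAFETY_ENDPOINT!,
  new AzureKeyCredential(process.env.CONTENT_SAFETY_KEY!)
);

async function screenText(text: string): Promise<void> {
  const result = await client.path("/text:analyze").post({
    // Omitting `categories` checks all of Hate, Sexual, Violence, and SelfHarm.
    body: { text, outputType: "FourSeverityLevels" },
  });
  if (isUnexpected(result)) {
    throw result.body.error;
  }
  // Each entry pairs a category (API names from the table above) with a severity score.
  for (const item of result.body.categoriesAnalysis) {
    console.log(`${item.category}: severity ${item.severity}`);
  }
}

await screenText("Example user comment to screen");
```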
Analyze text and images for harmful content using Azure AI Content Safety (@azure-rest/ai-content-safety). Use when moderating user-generated content, detecting hate speech, violence, sexual content, or self-harm, or when managing custom blocklists. Source: sickn33/antigravity-awesome-skills.
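A sketch of custom blocklist management, based on the REST paths in the package's samples. The blocklist name and item text are hypothetical, and the path and body shapes should be confirmed against your installed API version:

```typescript
import ContentSafetyClient, { isUnexpected } from "@azure-rest/ai-content-safety";
import { AzureKeyCredential } from "@azure/core-auth";

const client = ContentSafetyClient(
  process.env.CONTENT_SAFETY_ENDPOINT!,
  new AzureKeyCredential(process.env.CONTENT_SAFETY_KEY!)
);

const blocklistName = "ProductBannedTerms"; // hypothetical blocklist name

// Create or update the blocklist (merge-patch content type is required).
const created = await client
  .path("/text/blocklists/{blocklistName}", blocklistName)
  .patch({
    contentType: "application/merge-patch+json",
    body: { description: "Terms banned in product reviews" },
  });
if (isUnexpected(created)) throw created.body.error;

// Add or update items in the blocklist.
const added = await client
  .path("/text/blocklists/{blocklistName}:addOrUpdateBlocklistItems", blocklistName)
  .post({ body: { blocklistItems: [{ text: "forbidden phrase" }] } });
if (isUnexpected(added)) throw added.body.error;

// Analyze text against the blocklist in addition to the built-in categories.
const analysis = await client.path("/text:analyze").post({
  body: {
    text: "a comment containing a forbidden phrase",
    blocklistNames: [blocklistName],
    haltOnBlocklistHit: true,
  },
});
if (isUnexpected(analysis)) throw analysis.body.error;
console.log(analysis.body.blocklistsMatch ?? []);
```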