
Microsoft's AI initiatives prioritize safety and accountability.
Microsoft has launched Azure AI Content Safety, an AI-powered moderation service that detects inappropriate content in images and text across multiple languages. The service assigns a severity score to flagged content so moderators can prioritize what requires action. Azure AI Content Safety is integrated into Azure OpenAI Service and can also be applied to non-AI content, such as posts in online communities and on gaming platforms. Pricing starts at $1.50 per 1,000 images and $0.75 per 1,000 text records.
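To illustrate the severity-score workflow described above, here is a minimal sketch of how an application might route content based on such scores. This is not the Azure SDK; the category labels, the 0–6 scale, and the threshold policy are all hypothetical, chosen only to show the pattern of mapping scores to moderator actions.

```python
from dataclasses import dataclass


@dataclass
class ModerationResult:
    """One flagged item, as a content-safety service might report it."""
    category: str  # hypothetical label, e.g. "hate" or "violence"
    severity: int  # assumed scale: 0 (safe) through 6 (most severe)


def action_for(result: ModerationResult) -> str:
    """Map a severity score to a moderator action (hypothetical policy)."""
    if result.severity == 0:
        return "allow"
    if result.severity <= 2:
        return "flag_for_review"
    if result.severity <= 4:
        return "hide_pending_review"
    return "remove"


print(action_for(ModerationResult("hate", 0)))      # allow
print(action_for(ModerationResult("violence", 3)))  # hide_pending_review
```

In a real deployment, the `ModerationResult` values would come from the service's API response rather than being constructed by hand, and the thresholds would follow the platform's own moderation policy.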