Microsoft's AI initiatives prioritize safety and accountability.

TL;DR Summary
Microsoft has launched Azure AI Content Safety, an AI-powered moderation service that detects inappropriate content in images and text across multiple languages. The service assigns a severity score to flagged content, indicating to moderators which items require action. Azure AI Content Safety is integrated into Azure OpenAI Service and can also be applied to non-AI systems, such as online communities and gaming platforms. Pricing starts at $1.50 per 1,000 images and $0.75 per 1,000 text records.
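The severity-score workflow described above amounts to thresholding a per-item score into moderator actions. The sketch below illustrates that idea in Python; the 0-7 scale and the cut-off values are assumptions for illustration, not the service's documented thresholds:

```python
# Illustrative severity-score triage, modeled loosely on how a moderation
# service like Azure AI Content Safety flags content. The 0-7 scale and
# the cut-offs below are hypothetical, chosen for this example only.

def triage(severity: int) -> str:
    """Map a moderation severity score to a moderator action."""
    if severity <= 1:
        return "allow"   # benign content passes through untouched
    elif severity <= 4:
        return "review"  # borderline content is queued for a human moderator
    else:
        return "block"   # high-severity content is rejected automatically

print(triage(0))  # allow
print(triage(3))  # review
print(triage(6))  # block
```

In practice the real service returns per-category results (e.g. hate, violence, self-harm) each with its own severity, so a production pipeline would run a check like this per category rather than on a single score.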
Topics: technology, ai, ai-moderation, azure-ai-content-safety, microsoft, online-safety, toxicity-detection
- Microsoft launches new AI tool to moderate text and images (TechCrunch)
- Microsoft makes a push for AI responsibility and safety through Azure (ZDNet)
- AI Activists to Target Microsoft at Build 2023 (Thurrott.com)
- Microsoft pledges to watermark AI-generated images and videos (TechCrunch)
- Microsoft will ID its AI art with a hidden watermark (PCWorld)