"Key AI Companies and Leaders Join U.S. AI Safety Consortium to Address Risks"

The Biden administration has announced the formation of the U.S. AI Safety Institute Consortium (AISIC), which brings together more than 200 entities, including leading AI companies such as OpenAI, Google, and Microsoft, to support the safe development and deployment of generative AI. Housed at the Commerce Department's National Institute of Standards and Technology (NIST), the consortium will work on priority actions outlined in President Biden's AI executive order, including guidelines for red-teaming, capability evaluations, risk management, safety and security, and watermarking synthetic content. The initiative comes amid concern about the potential risks of generative AI and aims to establish the standards and tools needed to mitigate those risks while harnessing the technology's potential.
- US says leading AI companies join safety consortium to address risks (Reuters)
- Elizabeth Kelly to Lead U.S. AI Safety Body USAISI (TIME)
- Biden administration names a director of the new AI Safety Institute (The Associated Press)
- Biden appoints AI Safety Institute leaders as NIST funding concerns linger (VentureBeat)
- U.S. Commerce Secretary Gina Raimondo Announces Key Executive Leadership at U.S. AI Safety Institute (NIST)