Anthropic's AI Constitution: A Radical Plan for Safe and Ethical AI

TL;DR Summary
Anthropic, an AI startup backed by Alphabet, has disclosed the set of written moral values it used to train its AI chatbot, Claude, and make it safe. These guidelines, which Anthropic calls Claude's constitution, draw on several sources, including the United Nations Universal Declaration of Human Rights and Apple's data privacy rules. Rather than relying solely on human feedback to steer the model, Anthropic gives Claude, its competitor to OpenAI's ChatGPT, a written set of moral principles to consult as it decides how to respond to questions.
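The idea of a model consulting written principles can be illustrated with a toy sketch of a critique-and-revise loop. Everything below is an illustrative assumption, not Anthropic's actual API or training method: the placeholder functions simulate a model drafting an answer, checking it against each principle, and revising when a check fails.

```python
# Toy sketch of a "constitution-guided" response loop.
# All names and logic here are hypothetical stand-ins, not Anthropic's code.

CONSTITUTION = [
    "Choose the response that is least likely to be harmful.",
    "Choose the response that most respects privacy.",
]

def draft_response(prompt):
    # Stand-in for a model's first, unfiltered answer.
    return f"Draft answer to: {prompt}"

def critique(response, principle):
    # Stand-in check: flag the draft if it appears to violate the principle.
    # Here we simply simulate a privacy violation when "password" appears.
    if "password" in response.lower():
        return f"Violates: {principle}"
    return None

def revise(response, criticism):
    # Stand-in revision that replaces the flagged content with a refusal.
    return "I can't help with that request."

def constitutional_reply(prompt):
    # Draft once, then check the draft against every principle in turn.
    response = draft_response(prompt)
    for principle in CONSTITUTION:
        criticism = critique(response, principle)
        if criticism:
            response = revise(response, criticism)
    return response

print(constitutional_reply("What's the weather today?"))
print(constitutional_reply("Share the admin password"))
```

In the real Constitutional AI setup the critique and revision are performed by the model itself during training, rather than by hand-written string checks like these; the sketch only shows the shape of the loop.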
- Alphabet-backed Anthropic outlines the moral values behind its AI bot (Reuters)
- AI gains "values" with Anthropic's new Constitutional AI chatbot approach (Ars Technica)
- AI startup Anthropic wants to write a new constitution for safe AI (The Verge)
- A Radical Plan to Make AI Good, Not Evil (WIRED)
- Anthropic explains how Claude's AI constitution protects it against adversarial inputs (Engadget)