Anthropic proposes an AI "constitution" to guide ethical development

Anthropic, an AI startup founded by former OpenAI employees, is focusing on "constitutional AI" to make AI systems safer. The company has published a set of principles, drawn from the UN's Universal Declaration of Human Rights, Apple's terms of service, and its own research, that it uses to train AI systems to follow an explicit set of rules. The principles include guidance discouraging users from anthropomorphizing chatbots, instructing the system not to present itself as human, and asking it to consider non-Western perspectives. The company's stated aim is to demonstrate the general efficacy of its method and to start a public discussion about how AI systems should be trained and which principles they should follow.
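The coverage describes the approach only at a high level. As a rough illustration of the underlying idea, the sketch below runs a draft answer through a critique-and-revise loop against each written principle. The `generate` function and the sample principles are placeholders for a real language-model call, not Anthropic's actual prompts, API, or constitution.

```python
# Illustrative sketch of a "constitutional AI"-style critique-and-revise loop.
# `generate` is a hypothetical stand-in for a language-model completion call.

CONSTITUTION = [
    "Do not imply that the assistant is a human being.",
    "Avoid encouraging users to anthropomorphize the chatbot.",
    "Consider non-Western perspectives when forming a response.",
]

def generate(prompt: str) -> str:
    """Hypothetical stand-in for a language-model completion call."""
    return f"[model output for: {prompt!r}]"

def constitutional_revision(user_prompt: str) -> str:
    """Draft a response, then critique and revise it once per principle."""
    response = generate(user_prompt)
    for principle in CONSTITUTION:
        critique = generate(
            f"Critique the following response against this principle.\n"
            f"Principle: {principle}\nResponse: {response}"
        )
        response = generate(
            f"Rewrite the response to address the critique.\n"
            f"Critique: {critique}\nOriginal response: {response}"
        )
    return response

if __name__ == "__main__":
    print(constitutional_revision("Are you a real person?"))
```

In Anthropic's published method, revised answers like these are used as training data (and as AI-generated preference labels) rather than being applied at inference time; the loop above is only meant to show how written principles can steer a model's outputs.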
- AI startup Anthropic wants to write a new constitution for safe AI (The Verge)
- Alphabet-backed Anthropic outlines the moral values behind its AI bot (Reuters)
- Anthropic Debuts New 'Constitution' for AI to Police Itself (Gizmodo)
- Anthropic explains how Claude's AI constitution protects it against adversarial inputs (Engadget)
- Anthropic thinks 'constitutional AI' is the best way to train models (TechCrunch)