"Stanford's Call for AI Transparency: Urging Tech Companies to Reveal More"

Stanford researchers have developed a scoring system, the Foundation Model Transparency Index, to rate the transparency of 10 major foundation models, including OpenAI's GPT-4, Google's PaLM 2, and Meta's LLaMA 2. The rankings assess how much each developer discloses about its training data sources, the hardware and compute used, the labor involved, and downstream matters such as distribution and usage policies. The most transparent model was LLaMA 2, scoring 54%, followed by GPT-4 at 48% and PaLM 2 at 40%. The researchers argue that greater transparency is crucial as A.I. models become more powerful and widespread, because it lets regulators, researchers, and users better understand the models' capabilities and potential risks.
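The percentage scores reflect how many of the index's disclosure indicators each model satisfies. The sketch below is a minimal illustration of that idea only, assuming binary pass/fail indicators and using hypothetical criterion names; it is not the researchers' actual indicator list or methodology.

```python
# Illustrative sketch: compute a transparency percentage as the share of
# pass/fail disclosure criteria a model satisfies. Criterion names are
# hypothetical examples, not the Foundation Model Transparency Index's
# real indicators.

CRITERIA = [
    "training_data_sources_disclosed",
    "hardware_and_compute_disclosed",
    "labor_practices_disclosed",
    "downstream_usage_policy_disclosed",
]

def transparency_score(disclosures: dict[str, bool]) -> float:
    """Return the percentage of criteria marked True for a model."""
    satisfied = sum(1 for c in CRITERIA if disclosures.get(c, False))
    return 100 * satisfied / len(CRITERIA)

# Example: a hypothetical model that discloses only data sources and usage policy.
example_model = {
    "training_data_sources_disclosed": True,
    "downstream_usage_policy_disclosed": True,
}
print(f"Transparency score: {transparency_score(example_model):.0f}%")  # -> 50%
```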
- Stanford Is Ranking Major A.I. Models on Transparency (The New York Times)
- OpenAI is Building an AI Image Detector With '99%' Accuracy (PetaPixel)
- How to supercharge your Google searches with generative AI in Chrome (ZDNet)
- AI-Generated Case Reports Indistinguishable From Those Written by Humans (MedPage Today)
- Stanford researchers issue AI transparency report, urge tech companies to reveal more (Reuters)
Read the full story at The New York Times.