Google AI Researchers Challenge the Future of Artificial Intelligence

Google DeepMind researchers have published a paper highlighting the limitations of AI models, specifically transformer models such as OpenAI's GPT-2. The study finds that these models struggle to generalize beyond their training data and perform poorly on tasks outside that domain. Despite the massive datasets used to train them, the models remain proficient only in areas where they have been extensively trained. The findings temper the hype surrounding AI and caution against assuming that artificial general intelligence (AGI) is imminent, contradicting the optimistic views of executives such as OpenAI's Sam Altman and Microsoft's Satya Nadella, who plan to "build AGI together."
- "Google AI Researchers Found Something Their Bosses Might Not Be Happy About" (Futurism)
- "Google's new paper claims AI has its own limitations" (Gizchina.com)
- "Google researchers may have just turned the race to AGI upside down with a single paper" (Business Insider)
- "Google researchers deal a major blow to the theory AI is about to outsmart humans" (Business Insider India)