Google AI Researchers Challenge the Future of Artificial Intelligence
Originally Published 2 years ago — by Futurism

Google DeepMind researchers have published a paper highlighting the limitations of AI models, specifically transformer models like OpenAI's GPT-2. The study finds that these models struggle to generalize beyond their training data, limiting their ability to perform tasks outside the domains they were trained on. Despite the massive datasets used to build them, the models remain proficient only in areas where they have seen extensive training examples. The findings challenge the hype surrounding AI and caution against premature claims of artificial general intelligence (AGI). The research runs counter to the optimistic views of executives like OpenAI's Sam Altman and Microsoft's Satya Nadella, who have said they plan to "build AGI together."