The Future of AI: Multimodal Language Models and Understanding.

Source: Nautilus Magazine
TL;DR Summary

AI researchers are divided over whether large pretrained language models such as GPT-4 can understand language. Although these models can coordinate on shared meanings, they likely lack generative or constructive understanding. They are trained on a huge but restricted domain of text, whereas human understanding maps linguistic experience onto somatosensory experience. Humans apply mechanical reasoning to objects in the world; GPT-4 has no physics model. The narrative that these language models have rediscovered human reasoning is misguided and demonstrably false.

