A new study reveals that not everyone has an inner voice, a condition now termed anendophasia. Researchers found that people with inner speech performed better on language-related tasks, suggesting that an inner voice aids word processing. However, the performance gap vanished when the tasks were performed aloud, indicating that people without inner speech may compensate with alternative strategies. Further research is needed to understand the broader implications of anendophasia and of related conditions such as anauralia.
Researchers at the University of Geneva have developed an artificial neural network that can learn new tasks from verbal or written instructions and then describe those tasks in words to another AI, enabling it to perform the same tasks. This breakthrough, which models the brain's language-processing areas, marks a significant step for AI, with promising applications in robotics and the prospect of machines that communicate and learn from one another in human-like ways.
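To make the setup concrete, the skeleton below shows the interface such a system implies: one network, conditioned on an instruction embedding, performs a task and relays the description to a second network that then acts on it. This is a purely illustrative sketch, not the Geneva group's architecture; the class names, sizes, and the hash-based stand-in for a language-model embedding are all invented for the example.

```python
import zlib
import numpy as np

EMBED, STIM, ACT = 16, 2, 2  # toy dimensions, chosen arbitrarily

def embed_instruction(text: str) -> np.ndarray:
    """Stand-in for a pretrained language model's sentence embedding."""
    seed = zlib.crc32(text.encode())
    return np.random.default_rng(seed).standard_normal(EMBED)

class InstructedNet:
    """Toy network whose behavior is conditioned on an instruction vector."""

    def __init__(self, seed: int):
        rng = np.random.default_rng(seed)
        self.w = rng.standard_normal((EMBED + STIM, ACT)) * 0.1

    def act(self, instruction: np.ndarray, stimulus: np.ndarray) -> np.ndarray:
        # Behavior depends jointly on the instruction and the stimulus.
        return np.tanh(np.concatenate([instruction, stimulus]) @ self.w)

    def describe(self, instruction: np.ndarray) -> np.ndarray:
        # The published model produces a linguistic description of the task;
        # this toy simply relays the instruction embedding unchanged.
        return instruction

net_a, net_b = InstructedNet(seed=1), InstructedNet(seed=2)
instruction = embed_instruction("press left when the cue appears")
stimulus = np.array([1.0, -1.0])

# net_a "tells" net_b what the task is; net_b acts from that description alone.
action = net_b.act(net_a.describe(instruction), stimulus)
print(action)
```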
A study from the University of Toronto suggests that the speed of speech, rather than difficulty in finding words, is the more accurate indicator of brain health in older adults. The research found that a general slowdown in processing, rather than a specific difficulty retrieving words from memory, might underlie broader cognitive and linguistic changes with age. While the findings are promising, future research could incorporate verbal fluency tasks and subjective experiences of word-finding difficulty to better capture cognitive decline. Harnessing natural language processing technologies could enable automatic detection of language changes such as a slowed speech rate, a subtle marker of cognitive health.
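As one concrete example of a measure such technologies could automate, the sketch below estimates speech rate from word-level timestamps of the kind an off-the-shelf speech recognizer produces. The `Word` structure and the sample values are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Word:
    text: str
    start: float  # seconds from the start of the recording
    end: float

def speech_rate(words: list[Word]) -> float:
    """Words per minute across the span of the utterance."""
    if len(words) < 2:
        return 0.0
    duration = words[-1].end - words[0].start
    return len(words) / duration * 60.0 if duration > 0 else 0.0

# Hypothetical recognizer output for a short utterance.
sample = [Word("the", 0.00, 0.20), Word("quick", 0.25, 0.55), Word("fox", 0.60, 0.90)]
print(f"{speech_rate(sample):.0f} words per minute")
```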
A study involving polyglots, people proficient in multiple languages, used functional magnetic resonance imaging to monitor brain activity while they listened to passages in various languages. The brain's language-processing network showed increased activity when polyglots heard languages in which they were more proficient. One exception stood out: some polyglots showed a weaker response to their native language than to other languages they knew well. This suggests that polyglots become so efficient at processing their native language that less neural activity is required. The findings provide insight into how the brain processes language and into the efficiency of the neural processes underlying comprehension.
A study of polyglots who speak five or more languages has found that their brains process their native language differently, with less activity in the language network than for non-native languages of similar proficiency. This suggests that the first language acquired is processed with minimal effort, possibly because of far greater cumulative experience with it. The study also showed that the brain's language network responds more strongly to languages in which the speaker is more proficient, and that a separate network is recruited when processing non-native languages, indicating that such comprehension is more cognitively demanding.
A study has found that individuals with amnestic mild cognitive impairment (aMCI) struggle with processing complex language, particularly sentences with ambiguous references, independent of their memory deficits, suggesting a potential early biomarker for Alzheimer’s disease. This insight into linguistic deficits offers new avenues for early detection and treatment strategies, emphasizing the importance of looking beyond memory performance to identify early signs of cognitive decline.
Researchers have identified a key deficit in how individuals with amnestic mild cognitive impairment (aMCI) handle complex language, independent of the memory deficit that characterizes this group, which may serve as a cognitive biomarker for the early detection of dementia. The study found that aMCI patients struggled to process ambiguous sentences involving pronouns, indicating a breakdown at the higher level where linguistic form and meaning are integrated. These findings could aid the early detection and treatment of dementia, as well as inform neuroscience studies and linguistic theory.
Paris-based startup Mistral AI has launched Mistral Large, a high-performance AI model, and a consumer-facing chatbot called Le Chat to compete with industry leaders like ChatGPT. Mistral Large boasts top-tier reasoning capabilities and supports multiple languages, while Le Chat is available for free as a beta product and lets users choose among different models. The company has also announced a partnership with Microsoft to make its models available to Azure customers, signaling a shift toward a business model similar to OpenAI's.
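For developers, Mistral Large is also served through the company's hosted API. The sketch below assumes the chat-completions endpoint and payload schema Mistral documented at launch (`https://api.mistral.ai/v1/chat/completions`, model id `mistral-large-latest`); both may have changed since, and the API key is a placeholder read from the environment.

```python
import os
import requests

# Hedged sketch: endpoint, model id, and payload follow Mistral's published
# chat-completions schema at launch; MISTRAL_API_KEY is a placeholder.
response = requests.post(
    "https://api.mistral.ai/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['MISTRAL_API_KEY']}"},
    json={
        "model": "mistral-large-latest",
        "messages": [
            {"role": "user", "content": "Summarize the Turing test in one sentence."}
        ],
    },
    timeout=30,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```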
MIT researchers, using functional MRI together with an artificial language network, found that sentences with unusual grammar or unexpected meaning activate the brain's language network more strongly than either straightforward or nonsensical sentences. The study identified linguistic complexity and surprisal as key drivers of these responses, providing deeper insight into language processing, with potential implications for cognitive research.
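Surprisal here is the negative log-probability a language model assigns to each word given its preceding context. As a hedged illustration, the snippet below computes per-token surprisal with GPT-2 via the Hugging Face `transformers` library; the study used its own artificial language network, so GPT-2 is only a convenient public stand-in.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def surprisals(sentence: str) -> list[tuple[str, float]]:
    """Per-token surprisal, -log p(token | context), in nats."""
    ids = tok(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits               # [1, T, vocab]
    logps = torch.log_softmax(logits[0, :-1], dim=-1)
    next_ids = ids[0, 1:]                        # tokens being predicted
    s = -logps[torch.arange(len(next_ids)), next_ids]
    return list(zip(tok.convert_ids_to_tokens(next_ids.tolist()), s.tolist()))

for token, value in surprisals("Colorless green ideas sleep furiously."):
    print(f"{token!r:>14}  {value:5.2f}")
```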
Google has launched its Gemini AI model, which is currently available in the Bard chatbot. Users can try out Gemini Pro for free, and on the Pixel 8 Pro a version of Gemini powers AI-suggested text replies in WhatsApp. Gemini is currently available only in English, but support for other languages is planned. Future releases are expected to add multimodal capabilities. Google has also teased an upgraded chatbot called Bard Advanced, which may feature the more capable Gemini Ultra model. Accessing Gemini Pro is as simple as visiting the Bard website and logging in with a Google account, though users should be aware that the feature is still experimental and may exhibit software glitches.
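Developers can also reach Gemini Pro programmatically. This is a hedged sketch assuming Google's `google-generativeai` Python SDK as published for the Gemini Pro API release; the API key is a placeholder, and model names may have changed since.

```python
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder; supply a real key
model = genai.GenerativeModel("gemini-pro")
response = model.generate_content("Explain what a language model is in one sentence.")
print(response.text)
```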
Scientists have developed a neural network that can generalize language in a human-like way. The system excels at incorporating newly learned words into its existing vocabulary and using them in novel contexts, a key aspect of human cognition known as systematic generalization. This breakthrough could lead to more natural interactions between machines and humans. The network's performance surpassed that of the chatbot ChatGPT, which converses in a human-like manner but struggles with systematic generalization. The study demonstrates the potential for neural networks to emulate human cognition and to improve language processing in AI systems.
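A tiny interpreter makes the notion concrete. This is a conceptual toy in the style of the SCAN-like tests used in this line of work, not the paper's meta-learning model: once the invented word "dax" is taught as a primitive, a systematic learner should immediately reuse it inside every construction it already knows.

```python
# Mini-grammar: primitives map to actions; "twice" and "and" are constructions.
lexicon = {"jump": ["JUMP"], "walk": ["WALK"]}

def interpret(phrase: str) -> list[str]:
    words = phrase.split()
    if "and" in words:
        i = words.index("and")
        return interpret(" ".join(words[:i])) + interpret(" ".join(words[i + 1:]))
    if len(words) == 2 and words[1] == "twice":
        return interpret(words[0]) * 2
    return lexicon[words[0]]

print(interpret("jump twice"))          # ['JUMP', 'JUMP']
lexicon["dax"] = ["DAX"]                # teach one new primitive...
print(interpret("dax twice and walk"))  # ...and it composes immediately
```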
Researchers have discovered that neural activity in the left ventral temporoparietal junction (vTPJ) and the lateral anterior temporal lobe (lATL) during sentence processing is associated with social-semantic working memory rather than general language processing. These regions respond to sentences with social meaning and maintain activity even after the linguistic stimulus is gone, challenging previous assumptions about their role in language comprehension. This finding enhances our understanding of the brain's language network and its connection to social cognition.
Bilingual individuals show an advantage in memory retention and word prediction that stems from the "competing words" effect: because the same neural apparatus processes both of a bilingual's languages, hearing a word activates similar-sounding competitors from both. Bilinguals with high proficiency in their second language show enhanced memory and prediction abilities compared with monolinguals and with bilinguals of low second-language proficiency. Eye-tracking data support this account: bilinguals look longer at objects whose names overlap in sound, and that extra attention translates into better memory retention. Bilingualism thus enhances basic cognitive functions such as memory and categorization.
A study of Russian readers found that background noise, such as coffee-shop chatter or traffic, doesn't affect how the brain comprehends written text. The study examined the effects of both auditory and visual noise on reading fluency and comprehension. The researchers found that visual clutter from extraneous words actually increased reading speed, possibly because readers find the clutter irritating and want to finish quickly. The results support the good-enough language processing theory, indicating that auditory and visual noise makes readers neither more nor less reliant on this shallow, good-enough mode of comprehension while reading.
A study led by researchers at the University of East Anglia has found that toddlers who regularly hear more speech show signs of more efficient neural wiring. Brain scans revealed that their language-processing regions contained a greater concentration of myelin, the insulating sheath that surrounds axons and allows neurons to send messages faster and more efficiently. Although it is unknown whether the extra myelin actually affects a two-and-a-half-year-old's language abilities, the researchers suspect it could have important benefits.