Researchers at Northwestern University have found that the structural features of brains from humans, mice, and fruit flies are near a critical point similar to a phase transition, suggesting a universal principle may govern brain structure. This discovery could lead to new computational models that emulate brain complexity, as the brain's structure appears to be in a delicate balance between two phases, exhibiting fractal-like patterns and other hallmarks of criticality.
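One hallmark mentioned above, fractal-like structure, is commonly quantified with a box-counting dimension: count how many grid boxes of side `s` a structure occupies, and read the dimension off the slope of log(count) versus log(1/s). A minimal sketch on a synthetic point cloud (illustrative only; not the study's data or method):

```python
import numpy as np

def box_counting_dimension(points, box_sizes):
    """Estimate the box-counting (fractal) dimension of a 2-D point cloud.

    Counts occupied grid cells at each scale; the dimension is the slope
    of log(count) vs. log(1/box_size)."""
    counts = []
    for s in box_sizes:
        # Assign each point to a grid cell of side length s
        cells = set(map(tuple, np.floor(points / s).astype(int)))
        counts.append(len(cells))
    # Linear fit in log-log space; the slope is the dimension estimate
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(box_sizes)), np.log(counts), 1)
    return slope

# Points uniformly filling the unit square should give dimension ~2,
# while a genuinely fractal set would give a non-integer value
rng = np.random.default_rng(0)
square = rng.random((20000, 2))
print(round(box_counting_dimension(square, [0.5, 0.25, 0.125, 0.0625]), 1))  # → 2.0
```

A value between the integer dimensions, stable across scales, is one of the fractal signatures associated with structures near criticality.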
Researchers have used computational models to understand the aggregation of the protein alpha-synuclein, a key factor in the development of Parkinson’s disease. The study reveals that environmental factors such as molecular crowding and changes in ionic concentration enhance aggregation through distinct mechanisms. This research not only advances our understanding of neurodegenerative diseases but also opens new avenues for exploring therapeutic interventions.
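Aggregation models of this kind are often built from nucleation-elongation kinetics. In the toy sketch below, crowding is represented simply as a multiplier on both rate constants (an assumption for illustration, not the study's mechanism), and the "crowded" condition reaches half-aggregation sooner:

```python
def half_time(crowding_factor, dt=0.001, t_max=20.0):
    """Toy nucleation-elongation kinetics for protein aggregation.

    m: free monomer concentration, M: aggregated mass.
    Returns the time at which half the monomer has aggregated.
    Crowding is modeled as a simple multiplier on both rate constants
    (a hypothetical simplification, not the study's model)."""
    kn = 1e-4 * crowding_factor   # primary nucleation rate
    ke = 1.0 * crowding_factor    # fibril elongation rate
    m, M, t = 1.0, 0.0, 0.0
    while M < 0.5 and t < t_max:
        dM = (kn * m**2 + ke * m * M) * dt   # forward Euler step
        m, M, t = m - dM, M + dM, t + dt
    return t

print(f"dilute half-time:  {half_time(1.0):.1f}")
print(f"crowded half-time: {half_time(3.0):.1f}")
```

Even this crude model reproduces the qualitative effect: faster kinetics under crowding, because elongation feeds back on the growing aggregated mass.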
MIT neuroscientists have discovered that deep neural networks, while proficient at recognizing various images and sounds, often misidentify nonsensical stimuli as familiar objects or words, indicating that these models develop idiosyncratic "invariances" unlike those of human perception. The study also found that adversarial training could make the models' recognition patterns modestly more human-like, suggesting a new approach to evaluating and enhancing computational models of sensory perception. These findings highlight differences between human and computational sensory systems and offer a new tool for assessing how well such models mimic human perception.
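Adversarial training, the technique the study used to nudge model invariances, augments each training step with worst-case perturbed inputs so the model cannot rely on brittle features. A minimal sketch on a toy logistic-regression classifier with FGSM-style perturbations (illustrative only; not the study's architecture or data):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy binary classification data: two Gaussian blobs in 2-D
X = np.vstack([rng.normal(-1, 0.5, (200, 2)), rng.normal(1, 0.5, (200, 2))])
y = np.array([0] * 200 + [1] * 200)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w, b = np.zeros(2), 0.0
eps, lr = 0.2, 0.1

for _ in range(200):
    # FGSM-style adversarial examples: perturb each input in the
    # direction that most increases its loss (sign of input gradient)
    p = sigmoid(X @ w + b)
    grad_x = (p - y)[:, None] * w[None, :]
    X_adv = X + eps * np.sign(grad_x)

    # Standard gradient step, but computed on the perturbed inputs
    p_adv = sigmoid(X_adv @ w + b)
    w -= lr * X_adv.T @ (p_adv - y) / len(y)
    b -= lr * np.mean(p_adv - y)

acc = np.mean((sigmoid(X @ w + b) > 0.5) == y)
print(f"clean accuracy after adversarial training: {acc:.2f}")
```

Training on perturbed inputs forces the decision boundary away from individual training points, which is the mechanism thought to make a model's invariances less idiosyncratic.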
Two studies from researchers at MIT's K. Lisa Yang Integrative Computational Neuroscience Center suggest that the brain may develop an intuitive understanding of the physical world through a process similar to self-supervised learning used in computational models. The studies found that neural networks trained using self-supervised learning generated activity patterns similar to those seen in the brains of animals performing the same tasks. The findings indicate that these models can learn representations of the physical world to make accurate predictions, suggesting that the mammalian brain may use a similar strategy. The research has implications for understanding the brain and developing artificial intelligence systems that emulate natural intelligence.
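Self-supervised learning in this sense means the training signal comes from the data itself, e.g. predicting a system's next state rather than fitting human-provided labels. A minimal sketch, assuming a toy 1-D projectile and a linear next-state predictor (illustrative only; the studies used far richer networks and environments):

```python
import numpy as np

dt, g = 0.1, -9.8

def simulate(p0, v0, steps=50):
    """Simulate a 1-D projectile; state = (position, velocity)."""
    states = [(p0, v0)]
    for _ in range(steps):
        p, v = states[-1]
        states.append((p + v * dt, v + g * dt))
    return np.array(states)

# Training pairs (state_t, state_{t+1}): the "labels" are just future
# observations, so no human annotation is needed -- that is the
# self-supervised part
trajs = [simulate(p0, v0) for p0 in (0.0, 5.0) for v0 in (2.0, 10.0)]
X = np.vstack([t[:-1] for t in trajs])   # current states
Y = np.vstack([t[1:] for t in trajs])    # next states

# Fit an affine next-state predictor by least squares
A = np.hstack([X, np.ones((len(X), 1))])
W, *_ = np.linalg.lstsq(A, Y, rcond=None)

# The learned predictor recovers the true update rule: from (p=3, v=1)
# the next state should be (3.1, 0.02)
pred = np.array([3.0, 1.0, 1.0]) @ W
print(pred)
```

The predictor has, in effect, learned the physics (constant gravity, Euler dynamics) purely from watching trajectories, which is the intuition behind the studies' claim about the brain.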
Researchers have conducted a study of how adults make sense of young children's limited vocabulary. By analyzing thousands of hours of transcribed audio, they built computational models that decode adult interpretations of children's speech. The most successful models relied on context from previous conversations and on knowledge of common mispronunciations. This context-based interpretation may provide valuable feedback that aids babies in language acquisition, suggesting that adults' understanding of children's speech could facilitate more effective language learning.
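Models combining mispronunciation knowledge with conversational context are naturally framed as noisy-channel decoders: score each candidate word by how plausibly it could be mispronounced as the utterance, weighted by a context-derived prior. A toy sketch (the scoring function, vocabulary, and priors here are hypothetical, not the study's):

```python
import numpy as np

def edit_distance(a, b):
    """Standard Levenshtein distance via dynamic programming."""
    d = np.zeros((len(a) + 1, len(b) + 1), dtype=int)
    d[:, 0] = np.arange(len(a) + 1)
    d[0, :] = np.arange(len(b) + 1)
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            d[i, j] = min(d[i - 1, j] + 1,        # deletion
                          d[i, j - 1] + 1,        # insertion
                          d[i - 1, j - 1] + (a[i - 1] != b[j - 1]))  # substitution
    return d[len(a), len(b)]

def interpret(utterance, vocabulary, context_prior):
    """Noisy-channel guess: P(word | utterance) ∝ P(utterance | word) · P(word).

    Mispronunciation likelihood decays with edit distance; the prior
    comes from the preceding conversational context."""
    scores = {w: np.exp(-edit_distance(utterance, w)) * context_prior.get(w, 1e-6)
              for w in vocabulary}
    return max(scores, key=scores.get)

vocab = ["dog", "duck", "sock"]
# Hypothetical context: the family was just talking about animals
prior = {"dog": 0.5, "duck": 0.4, "sock": 0.1}
print(interpret("gog", vocab, prior))  # → "dog"
```

The same utterance with a different context prior can resolve to a different word, which is why the context-aware models in the study outperformed purely acoustic ones.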
A study conducted by MIT neuroscientists has found that deep neural networks, while capable of identifying objects similar to human sensory systems, often produce unrecognizable or distorted images and sounds when prompted to generate stimuli similar to a given input. This suggests that neural networks develop their own unique invariances, diverging from human perceptual patterns. The researchers propose using adversarial training to make the models' generated stimuli more recognizable to humans, providing insights into evaluating models that mimic human sensory perceptions.
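Generating stimuli a model judges "similar to a given input" can be done by optimizing a random input until its internal activations match those of a reference, the idea behind model metamers. A minimal numpy sketch with a single fixed random ReLU layer (a deliberately tiny stand-in for the deep networks in the study):

```python
import numpy as np

rng = np.random.default_rng(2)
W = rng.normal(size=(8, 16))  # one fixed random layer: 16-D inputs, 8 features

def activations(x):
    return np.maximum(W @ x, 0.0)  # ReLU features

x_ref = rng.normal(size=16)
a_ref = activations(x_ref)

def mismatch(x):
    return np.linalg.norm(activations(x) - a_ref) / np.linalg.norm(a_ref)

# Start from noise and descend on the activation-matching loss
x = rng.normal(size=16)
init = mismatch(x)
for _ in range(5000):
    a = activations(x)
    # Gradient of 0.5 * ||a - a_ref||^2 w.r.t. x (mask = active ReLU units)
    x -= 0.01 * (W.T @ ((a - a_ref) * (a > 0)))
final = mismatch(x)

print(f"activation mismatch: {init:.2f} -> {final:.2f}")
# The optimized input matches the model's internal activations, yet it
# need not resemble the reference input itself:
print(f"distance from reference input: {np.linalg.norm(x - x_ref):.2f}")
```

Because the layer maps 16 dimensions to 8, many inputs share the same activations; the optimized input lands on one the model treats as equivalent, and in deep networks such metamers are often unrecognizable to humans.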
Researchers at CNRS, Université Aix-Marseille, and Maastricht University have used computational models to predict how the human brain transforms sounds into semantic representations of what is happening in the surrounding environment. The team assessed three classes of computational models (acoustic, semantic, and sound-to-event DNNs) and found that the DNN-based models greatly outperformed both approaches based on acoustic features and techniques that characterize cerebral responses by sorting sounds into discrete categories. The researchers also hypothesized that the human brain makes sense of natural sounds much as it processes words.
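Comparisons of this kind are typically run as encoding models: regress measured brain responses onto each candidate feature space and compare held-out prediction accuracy. A synthetic sketch with ridge regression (all data and features below are simulated for illustration; they are not the study's recordings or models):

```python
import numpy as np

rng = np.random.default_rng(3)
n_sounds, n_voxels = 200, 5

# Simulated feature spaces for each sound: low-level acoustic features
# vs. DNN embeddings (both random here, purely for demonstration)
acoustic = rng.normal(size=(n_sounds, 10))
dnn = rng.normal(size=(n_sounds, 10))

# Simulated voxel responses driven by the DNN features plus noise, so the
# DNN feature space is the "correct" one by construction
true_w = rng.normal(size=(10, n_voxels))
responses = dnn @ true_w + 0.5 * rng.normal(size=(n_sounds, n_voxels))

def encoding_score(features, responses, train=150, alpha=1.0):
    """Ridge regression from features to responses; mean test-set correlation."""
    Xtr, Xte = features[:train], features[train:]
    Ytr, Yte = responses[:train], responses[train:]
    W = np.linalg.solve(Xtr.T @ Xtr + alpha * np.eye(Xtr.shape[1]), Xtr.T @ Ytr)
    pred = Xte @ W
    return np.mean([np.corrcoef(pred[:, v], Yte[:, v])[0, 1]
                    for v in range(responses.shape[1])])

print(f"acoustic model score: {encoding_score(acoustic, responses):.2f}")
print(f"DNN model score:      {encoding_score(dnn, responses):.2f}")
```

The feature space that better captures what drives the responses wins on held-out correlation; in the study, that was the sound-to-event DNN class.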