Tag

Computational Models

All articles tagged with #computational models

"Universal Brain Structure Laws Span Species, Study Finds"

Originally Published 1 year ago — by SciTechDaily

Researchers at Northwestern University have found that the structural features of brains from humans, mice, and fruit flies are near a critical point similar to a phase transition, suggesting a universal principle may govern brain structure. This discovery could lead to new computational models that emulate brain complexity, as the brain's structure appears to be in a delicate balance between two phases, exhibiting fractal-like patterns and other hallmarks of criticality.
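
The article does not include the team's analysis code; as a rough, hypothetical illustration of one hallmark mentioned above, the sketch below estimates a box-counting fractal dimension for a 2D binary structure with NumPy. The input mask and the box sizes are made up for the demonstration.

```python
# Minimal sketch (not the study's actual pipeline): estimating a box-counting
# fractal dimension, one common hallmark examined when testing for criticality.
import numpy as np

def box_counting_dimension(mask, box_sizes=(2, 4, 8, 16, 32)):
    """Estimate the fractal (box-counting) dimension of a 2D binary mask."""
    counts = []
    for size in box_sizes:
        # Trim the mask so it tiles evenly, then count boxes containing structure.
        h = (mask.shape[0] // size) * size
        w = (mask.shape[1] // size) * size
        blocks = mask[:h, :w].reshape(h // size, size, w // size, size)
        counts.append(blocks.any(axis=(1, 3)).sum())
    # Slope of log(count) vs. log(1/size) approximates the fractal dimension.
    slope, _ = np.polyfit(np.log(1.0 / np.array(box_sizes)), np.log(counts), 1)
    return slope

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    toy_mask = rng.random((256, 256)) > 0.7  # stand-in for a segmented structure
    print(f"Estimated box-counting dimension: {box_counting_dimension(toy_mask):.2f}")
```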

Unveiling the Aggregation of Proteins in Parkinson's Disease

Originally Published 1 year ago — by Neuroscience News

Researchers have used computational models to understand the aggregation of alpha-synuclein protein, a key factor in Parkinson’s disease development. The study reveals that environmental factors such as molecular crowding and ionic changes enhance aggregation through distinct mechanisms. This research not only advances our understanding of neurodegenerative diseases but also offers new avenues for exploring therapeutic interventions.
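
As a loose illustration of how computational models can probe aggregation kinetics (this is not the study's model, and the scheme and rate constants are invented for the example), the sketch below integrates a toy nucleation-plus-elongation system and shows how a faster nucleation rate, used here as a crude stand-in for molecular crowding, shortens the time to half-maximal aggregated mass.

```python
# Toy kinetic sketch (rates invented for illustration; not the study's model):
# monomers convert to fibril mass via primary nucleation and elongation.
import numpy as np
from scipy.integrate import solve_ivp

def aggregation(t, y, k_nucleation, k_elongation, n_critical=2):
    monomer, fibril_mass = y
    growth = (k_nucleation * monomer ** n_critical
              + k_elongation * monomer * fibril_mass)
    return [-growth, growth]

t_span = (0.0, 50.0)
t_eval = np.linspace(*t_span, 200)
for label, k_nuc in [("baseline", 1e-4), ("crowded (faster nucleation)", 1e-3)]:
    sol = solve_ivp(aggregation, t_span, [1.0, 0.0], t_eval=t_eval, args=(k_nuc, 0.5))
    # Time at which half of the monomer pool has converted to aggregates.
    half_time = sol.t[np.searchsorted(sol.y[1], 0.5)]
    print(f"{label}: time to 50% aggregated mass ~ {half_time:.1f} a.u.")
```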

"Unveiling the Unique Perspective of Deep Neural Networks: A Departure from Human Perception"

Originally Published 2 years ago — by SciTechDaily

Featured image for "Unveiling the Unique Perspective of Deep Neural Networks: A Departure from Human Perception"
Source: SciTechDaily

MIT neuroscientists have discovered that deep neural networks, while proficient at recognizing various images and sounds, often misidentify nonsensical stimuli as familiar objects or words, indicating that these models develop unique, idiosyncratic "invariances" unlike human perception. The study also found that adversarial training could slightly improve the models' recognition patterns, suggesting a new approach to evaluating and enhancing computational models of sensory perception. The findings highlight where human and computational sensory systems diverge and offer a new tool for judging how faithfully such models mimic human perception.
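
To make the idea of model-specific "invariances" concrete, here is a minimal, hypothetical sketch in PyTorch (not the study's code or architecture): starting from noise, an input is optimized until a toy network produces nearly the same internal activations as it does for a reference input. Nothing forces the synthesized input to look meaningful to a human, which is the gap the study describes.

```python
# Illustrative sketch only: synthesize a stimulus that drives a toy network to
# the same activations as a reference input, exposing the model's invariances.
import torch
import torch.nn as nn

torch.manual_seed(0)

# A tiny stand-in model; the real studies use deep image and audio networks.
model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 32))
model.eval()

reference = torch.randn(1, 64)                 # the "natural" input
with torch.no_grad():
    target_activation = model(reference)

# Start from noise and optimize the input so its activations match the target.
synthetic = torch.randn(1, 64, requires_grad=True)
optimizer = torch.optim.Adam([synthetic], lr=0.05)

for step in range(500):
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(synthetic), target_activation)
    loss.backward()
    optimizer.step()

# 'synthetic' now evokes nearly identical model responses, yet nothing
# guarantees it resembles 'reference' to a human observer.
print(f"final activation mismatch: {loss.item():.6f}")
```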

"Brain's Learning Process Mirrors Computational Models"

Originally Published 2 years ago — by MIT News

Featured image for "Brain's Learning Process Mirrors Computational Models"
Source: MIT News

Two studies from researchers at MIT's K. Lisa Yang Integrative Computational Neuroscience Center suggest that the brain may develop an intuitive understanding of the physical world through a process similar to self-supervised learning used in computational models. The studies found that neural networks trained using self-supervised learning generated activity patterns similar to those seen in the brains of animals performing the same tasks. The findings indicate that these models can learn representations of the physical world to make accurate predictions, suggesting that the mammalian brain may use a similar strategy. The research has implications for understanding the brain and developing artificial intelligence systems that emulate natural intelligence.
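
As a minimal sketch of the self-supervised idea (the task and architecture are illustrative, not the models used in the studies), the PyTorch snippet below trains a small network to predict the next state of a toy free-fall simulation from the current state, so the data itself supplies the training signal and no labels are needed.

```python
# Minimal self-supervised sketch: learn to predict the next physical state.
import torch
import torch.nn as nn

torch.manual_seed(0)

def simulate_fall(n, dt=0.05, g=-9.8):
    """Toy physics: (position, velocity) now and one time step later."""
    pos = torch.rand(n) * 10.0
    vel = torch.rand(n) * 2.0 - 1.0
    current = torch.stack([pos, vel], dim=1)
    nxt = torch.stack([pos + vel * dt, vel + g * dt], dim=1)
    return current, nxt

predictor = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 2))
optimizer = torch.optim.Adam(predictor.parameters(), lr=1e-3)

for epoch in range(2000):
    state, next_state = simulate_fall(256)
    loss = nn.functional.mse_loss(predictor(state), next_state)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(f"prediction error after training: {loss.item():.4f}")
```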

Understanding Baby Talk: Decoding Early Linguistic Efforts

Originally Published 2 years ago — by Neuroscience News

Researchers have conducted a study to understand how adults make sense of the limited vocabulary of young children. By analyzing thousands of hours of transcribed audio, computational models were created to decode adult interpretations of baby talk. The most successful models relied on context from previous conversations and knowledge of common mispronunciations. This context-based interpretation by adults may provide valuable feedback, aiding babies in language acquisition. The findings suggest that adults' understanding of children's speech could facilitate more effective language learning in young children.
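
One way to picture the context-based interpretation described above is as a simple probabilistic calculation; in the toy sketch below (all words and probabilities are invented for illustration), a contextual prior over intended words is combined with a likelihood of common child mispronunciations to recover the most plausible intended word.

```python
# Hypothetical numbers for illustration only; the study's models were fit to
# thousands of hours of transcribed child-adult conversation.

def interpret(heard, context_prior, mispronunciation_likelihood):
    """Return a normalized posterior over intended words given the heard form."""
    scores = {
        word: context_prior.get(word, 1e-6)
              * mispronunciation_likelihood.get((word, heard), 1e-6)
        for word in context_prior
    }
    total = sum(scores.values())
    return {word: score / total for word, score in scores.items()}

# Prior from the preceding conversation (e.g., the family was discussing pets).
context_prior = {"dog": 0.5, "duck": 0.3, "sock": 0.2}

# How likely each intended word is to surface as the heard form "gog".
mispronunciation_likelihood = {
    ("dog", "gog"): 0.30,   # consonant harmony: d -> g
    ("duck", "gog"): 0.05,
    ("sock", "gog"): 0.01,
}

posterior = interpret("gog", context_prior, mispronunciation_likelihood)
print(max(posterior, key=posterior.get), posterior)  # "dog" wins given context
```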

Unveiling the Disparity in Perception: Neural Networks vs. Human Sensory Recognition

Originally Published 2 years ago — by Neuroscience News

A study conducted by MIT neuroscientists has found that deep neural networks, while capable of identifying objects similar to human sensory systems, often produce unrecognizable or distorted images and sounds when prompted to generate stimuli similar to a given input. This suggests that neural networks develop their own unique invariances, diverging from human perceptual patterns. The researchers propose using adversarial training to make the models' generated stimuli more recognizable to humans, providing insights into evaluating models that mimic human sensory perceptions.

Predicting Natural Sound Processing through Brain Activity Analysis

Originally Published 2 years ago — by Medical Xpress

Researchers at CNRS, Université Aix-Marseille, and Maastricht University have used computational models to predict how the human brain transforms sounds into semantic representations of what is happening in the surrounding environment. The team assessed three classes of computational models, namely acoustic models, semantic models, and sound-to-event deep neural networks (DNNs), and found that the DNN-based models greatly surpassed both purely acoustic approaches and techniques that characterize cerebral responses by sorting sounds into discrete categories. The researchers also hypothesized that the human brain makes sense of natural sounds in much the same way it processes the meaning of words.
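
As a rough illustration of how such model classes can be compared against brain data (synthetic data only, not the study's pipeline), the sketch below fits a ridge-regression encoding model from two hypothetical feature spaces to a simulated brain response and reports cross-validated predictive accuracy for each.

```python
# Illustrative encoding-model sketch with synthetic data: which feature space
# better predicts a simulated "voxel" response to a set of sounds?
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_sounds = 200

dnn_features = rng.standard_normal((n_sounds, 50))        # e.g., sound-to-event DNN layer
acoustic_features = rng.standard_normal((n_sounds, 50))   # e.g., spectrotemporal statistics

# Synthetic response driven mostly by the DNN features, plus noise.
weights = rng.standard_normal(50)
brain_response = dnn_features @ weights + 0.5 * rng.standard_normal(n_sounds)

for name, features in [("acoustic", acoustic_features), ("DNN", dnn_features)]:
    r2 = cross_val_score(Ridge(alpha=1.0), features, brain_response, cv=5, scoring="r2")
    print(f"{name:>8} features: mean cross-validated R^2 = {r2.mean():.2f}")
```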