A new brain-decoding technique called "mind captioning" can generate accurate, structured text descriptions of what a person is seeing or recalling by translating semantic patterns of brain activity into language. Because the decoding does not rely on the brain's traditional language areas, the method opens new possibilities for nonverbal communication and for probing mental content.
A new non-invasive AI technique called "mind captioning" can translate brain activity into detailed descriptive sentences, revealing how the brain interprets visual information and potentially aiding people with language impairments.
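Reports of the method describe a two-stage pipeline: a linear decoder first maps brain activity into the semantic feature space of a text model, and a search over candidate sentences then finds the description whose features best match the decoded ones. Below is a minimal sketch of that two-stage idea on synthetic data; the `embed()` stand-in, the ridge solver, and the small candidate list are illustrative assumptions, not the study's actual models (which reportedly use deep language-model features and iterative text optimization).

```python
import numpy as np
from numpy.linalg import solve

rng = np.random.default_rng(0)
n_trials, n_voxels, n_feat = 500, 300, 64

# Hypothetical embed(): a random letter-count projection standing in
# for a real language model's sentence features.
W_embed = rng.normal(size=(n_feat, 26))

def embed(sentence):
    counts = np.zeros(26)
    for ch in sentence.lower():
        if ch.isalpha():
            counts[ord(ch) - 97] += 1
    v = W_embed @ counts
    return v / (np.linalg.norm(v) + 1e-8)

# Synthetic training set: voxel responses linearly related to semantic features.
true_map = rng.normal(size=(n_voxels, n_feat))   # assumed feature->voxel map
Y = rng.normal(size=(n_trials, n_feat))          # semantic features per trial
X = Y @ true_map.T + 0.5 * rng.normal(size=(n_trials, n_voxels))

# Stage 1: ridge regression mapping brain activity to semantic features.
lam = 10.0
B = solve(X.T @ X + lam * np.eye(n_voxels), X.T @ Y)   # (n_voxels, n_feat)

# Stage 2: rank candidate sentences by similarity to the decoded features.
def decode_caption(brain_pattern, candidates):
    z = brain_pattern @ B
    z = z / (np.linalg.norm(z) + 1e-8)
    return max(candidates, key=lambda s: float(embed(s) @ z))

# Simulate a brain response to one description and try to recover it.
target = "a person watches an airplane take off"
probe = embed(target) @ true_map.T
print(decode_caption(probe, [target, "a leopard rests in a tree",
                             "waves crash on a beach"]))
```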
Japanese researchers have developed a "brain decoding" technology that uses artificial intelligence (AI) to reconstruct mental images from human brain activity. In the study, the researchers successfully extracted and visualized mental images of objects and landscapes, including a leopard and an airplane. The technology has potential applications in medicine and welfare, such as new communication devices and a better understanding of how hallucinations and dreams arise in the brain.
Japanese researchers have developed a "brain decoding" technology that uses AI to translate human brain activity into visible mental images of objects and landscapes, such as a leopard and an airplane. While previous research has reconstructed images that a person was directly viewing, the new approach aims to make internally imagined content visible to others.
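Such image-decoding work generally follows the same recipe: learn a regression from voxel patterns into the feature space of an image model, then turn decoded features back into a picture. The sketch below illustrates the decoding half on synthetic data, approximating reconstruction with nearest-neighbour retrieval over a small gallery; the Ridge decoder, the forward map, and the gallery are assumptions for illustration, whereas the actual research reportedly pairs decoded features with a generative model to render novel images.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
n_train, n_voxels, n_feat, n_gallery = 400, 300, 128, 50

feats_train = rng.normal(size=(n_train, n_feat))   # image features seen in training
forward = rng.normal(size=(n_feat, n_voxels))      # assumed feature->voxel map
brain_train = feats_train @ forward + 0.3 * rng.normal(size=(n_train, n_voxels))

decoder = Ridge(alpha=5.0).fit(brain_train, feats_train)   # brain -> image features

gallery = rng.normal(size=(n_gallery, n_feat))     # candidate image features

def reconstruct(brain_pattern):
    """Decode image features, then return the closest gallery image's index."""
    z = decoder.predict(brain_pattern[None, :])[0]
    return int(np.argmin(np.linalg.norm(gallery - z, axis=1)))

# Simulate viewing (or imagining) gallery image 7 and check that we recover it.
probe = gallery[7] @ forward
print(reconstruct(probe))   # -> 7 for this noise-free probe
```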
Neuroscientists are developing more naturalistic experiments to study animal and human behavior, aiming for a more holistic understanding of the brain. Unlike tightly controlled traditional laboratory tasks, these experiments focus on natural behaviors such as escaping predators or finding food. By studying such actions, scientists hope to decode the brain in ways that are more relevant to everyday life.
Researchers at Columbia University have discovered that the brain encodes phonetic information in noisy environments differently depending on whether the speech is louder or quieter than the background and on how closely the listener is attending to it. Using neural recordings and computer models, the study demonstrated that "glimpsed" phonetic information (from moments when speech rises above the noise) and "masked" phonetic information (from moments when it is drowned out) are encoded separately in the brain. The discovery could lead to significant advances in hearing-aid technology, particularly in improving the auditory attention-decoding systems used by brain-controlled hearing aids.
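The glimpsed/masked distinction rests on a simple acoustic criterion: a stretch of speech counts as "glimpsed" when the attended talker is locally louder than the background and "masked" when it is drowned out. The sketch below labels audio frames this way on toy signals; the 20 ms framing, the 0 dB threshold, and the synthetic talker and babble are illustrative assumptions rather than the study's parameters.

```python
import numpy as np

FS = 16_000  # assumed sample rate (Hz)

def frame_energy(signal, frame_len=320, hop=160):
    """Short-time energy: 20 ms frames with a 10 ms hop at 16 kHz."""
    n_frames = 1 + (len(signal) - frame_len) // hop
    return np.array([np.sum(signal[i * hop : i * hop + frame_len] ** 2)
                     for i in range(n_frames)])

def label_glimpsed(target, masker, margin_db=0.0):
    """True for frames where the attended talker locally exceeds the background."""
    snr_db = 10 * np.log10((frame_energy(target) + 1e-12)
                           / (frame_energy(masker) + 1e-12))
    return snr_db > margin_db

# Toy one-second signals: a talker with fluctuating amplitude over steady babble.
rng = np.random.default_rng(2)
talker = rng.normal(size=FS) * np.abs(np.sin(np.linspace(0, 8 * np.pi, FS)))
babble = 0.5 * rng.normal(size=FS)
glimpsed = label_glimpsed(talker, babble)
print(f"{glimpsed.mean():.0%} of frames glimpsed, {(~glimpsed).mean():.0%} masked")
```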