Scientists have developed a brain implant that can decode inner speech into text or sound with up to 74% accuracy, a promising advance for people with speech or motor impairments, though further gains in accuracy and stronger safeguards are still needed.
A study published in Cell reveals that brain implants can decode not only attempted speech but also imagined inner speech, raising concerns about mental privacy as the technology advances. Researchers found that AI can translate the faint brain signals associated with inner speech into words, which could ease communication for paralyzed individuals but could also enable unintentional mind-reading. Protective measures such as wake words were tested, but experts warn that the boundary between private and public thoughts may blur, especially once the technology reaches consumer devices.
Researchers at Stanford have developed a brain-computer interface that decodes inner speech from neural activity, opening exciting possibilities for communication for people with paralysis while raising significant concerns about mind reading without consent. The system can interpret imagined words with over 70% accuracy, but it also risks unintended thought leaks, prompting calls for safeguards and regulation to protect mental privacy as the technology matures.