How AI reads your scrambled inner thoughts
AI-powered brain–computer interfaces are decoding inner speech and images from neural signals, bringing scientists closer to translating thoughts into words.

Advances in artificial intelligence and brain–computer interfaces are allowing scientists to decode inner speech and mental imagery, opening new possibilities for people who cannot communicate through speech.
For decades, the electrical chatter inside the human brain was considered too complex to interpret. Today, artificial intelligence is changing that assumption—bringing researchers closer than ever to translating thoughts into words, images, and even speech.
In a landmark experiment, a 52-year-old woman paralysed by a stroke nearly two decades ago watched sentences appear on a screen as she silently imagined speaking them. Identified only as participant T16, she had a tiny array of electrodes surgically implanted in her brain. A computer system powered by AI decoded the neural signals generated by her thoughts and converted them into text in real time.
The study, conducted at Stanford University, involved four participants, including patients with amyotrophic lateral sclerosis (ALS). It marked the closest scientists have come yet to a functional form of “mind reading.”
From brain signals to words
The breakthrough was announced in August 2025. Soon after, researchers in Japan unveiled a complementary “mind captioning” technique that used non-invasive brain scans and multiple AI models to generate detailed descriptions of what a person was seeing or imagining.
Together, these advances are offering neuroscientists unprecedented insight into the brain’s inner workings—while also promising new ways for people with paralysis or speech impairments to communicate.
“In the next few years, we will begin to see these technologies commercialised and deployed at scale,” says Maitreyee Wairagkar, a neuroengineer working on brain–computer interfaces (BCIs) at the University of California, Davis. Companies such as Neuralink are already racing to bring brain-implant technologies out of laboratories and into everyday life.
A long road to decoding thought
BCIs are not new. In the late 1960s, researchers showed that animals could control machines using signals from individual neurons. Over time, BCIs became capable of decoding movement-related brain signals, enabling users to control prosthetic limbs or computer cursors.
Decoding speech, however, proved far more challenging. Early research relied heavily on non-human primates, which made studying language impossible. Progress accelerated only in recent years, as researchers focused on patients with conditions such as ALS or locked-in syndrome.
In 2021, Stanford scientists demonstrated that a paralysed man could generate text by attempting to handwrite letters, with the imagined pen strokes decoded from his motor cortex. In 2024, researchers went further, translating attempted speech directly into text at around 32 words per minute with nearly 98% accuracy.
These systems rely on implanted microelectrode arrays that record neural activity from speech-related brain regions. Machine-learning algorithms then identify patterns linked to phonemes, the basic building blocks of language—much like voice assistants interpret sound, but at the level of neural signals.
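For readers who want a concrete sense of what that classification step involves, the sketch below is a toy illustration only, not the Stanford pipeline: it trains a simple classifier on synthetic “neural” feature vectors to label short windows of activity with phonemes. The channel count, phoneme set, and classifier choice are all assumptions made for illustration; real decoders use recurrent or transformer networks on recorded microelectrode data.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

PHONEMES = ["AA", "B", "K", "S", "T"]   # tiny assumed phoneme set, for illustration only
N_CHANNELS = 96                          # assumed channel count of a microelectrode array
SAMPLES = 200                            # synthetic feature windows per phoneme

# Give each phoneme its own mean firing-rate pattern, then add noise around it.
means = rng.normal(0.0, 1.0, size=(len(PHONEMES), N_CHANNELS))
X = np.vstack([m + rng.normal(0.0, 0.5, size=(SAMPLES, N_CHANNELS)) for m in means])
y = np.repeat(np.arange(len(PHONEMES)), SAMPLES)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# A linear classifier stands in for the far more powerful networks real decoders use.
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("Held-out phoneme accuracy:", round(clf.score(X_test, y_test), 3))

In a working BCI, the decoded phoneme stream is further passed through a language model that assembles likely words and sentences, much as speech-recognition software does with audio.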
Unlocking “inner speech”
A key limitation remained: most systems required users to actively attempt speech, a process that can be tiring and slow. Researchers wanted to know whether they could decode “inner speech”—the words we say silently in our minds.
By asking participants to perform mental tasks such as counting shapes or imagining sentences, scientists found that traces of inner speech could indeed be detected in the motor cortex. In controlled tasks, accuracy reached up to 74% in real time. More open-ended thoughts, however, remain difficult to decode and often produce nonsensical output.
“We’re not able to get someone’s fully unfiltered inner speech perfectly accurately,” says Frank Willett, co-director of Stanford’s Neural Prosthetics Translational Laboratory. “But we can pick up clear traces of it.”
Beyond text: decoding tone and emotion
In 2025, Wairagkar’s team at UC Davis achieved another breakthrough—decoding not just words, but also tone, pitch, rhythm, and intonation. This allowed patients to convey emotion and emphasis, even singing simple melodies through a speech-generating system.
About 60% of the generated words were judged intelligible—still imperfect, but a major step toward naturalistic communication.
Future improvements are likely to come from sampling more neurons and exploring brain regions beyond the motor cortex. Areas involved in auditory processing, for example, may hold further clues to how inner speech is formed.
Reading images from the mind
Parallel research is also decoding visual imagination. Using fMRI brain scans and generative AI models like Stable Diffusion, scientists have begun reconstructing images people see—or even imagine—based solely on brain activity.
A 2023 study in Japan trained AI models on thousands of brain scans recorded while participants viewed images. In many cases, the system produced recognisable approximations of the original pictures, highlighting how generative AI is accelerating progress in neuroscience.
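As a rough illustration of the decoding idea behind such studies, and not the Japanese team’s actual pipeline, the Python sketch below maps synthetic “voxel” patterns to image-embedding vectors with ridge regression and then retrieves the closest candidate image. The final generative rendering step is omitted, and every dimension used here is an assumption chosen to keep the example small.

import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
N_VOXELS, EMB_DIM, N_TRAIN, N_TEST = 500, 64, 400, 20   # assumed sizes, far smaller than real fMRI data

# Simulate a hidden linear relationship between brain activity and image embeddings.
W_true = rng.normal(size=(N_VOXELS, EMB_DIM))

def simulate(n):
    voxels = rng.normal(size=(n, N_VOXELS))                       # synthetic "fMRI" patterns
    embeddings = voxels @ W_true + rng.normal(scale=5.0, size=(n, EMB_DIM))
    return voxels, embeddings

X_train, E_train = simulate(N_TRAIN)
X_test, E_test = simulate(N_TEST)

# Step 1: learn a voxel-to-embedding mapping (studies of this kind fit similar
# regressions from fMRI responses to the latent features of an image model).
model = Ridge(alpha=10.0).fit(X_train, E_train)
E_pred = model.predict(X_test)

# Step 2: retrieve the closest candidate image by cosine similarity; a generative
# model would instead render a picture conditioned on the predicted embedding.
def unit(a):
    return a / np.linalg.norm(a, axis=1, keepdims=True)

nearest = np.argmax(unit(E_pred) @ unit(E_test).T, axis=1)
print("Retrieval accuracy among", N_TEST, "candidates:",
      round((nearest == np.arange(N_TEST)).mean(), 3))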
A new frontier of intelligence
Together, these advances suggest a future where thoughts, images, and speech can be translated directly from the brain—reshaping communication, medicine, and human–machine interaction.
While true mind reading remains out of reach, AI is steadily transforming neural noise into meaningful signals, achieving what was once thought impossible.

