Earlier this year, Neuralink implanted a chip in the brain of 29-year-old American Noland Arbaugh, who is paralyzed from the shoulders down. The chip allowed Arbaugh to move a mouse cursor on a screen simply by imagining it moving.

In May 2023, US researchers announced a non-invasive way to “decode” the words someone is thinking, using brain scans combined with generative AI. A similar project generated headlines about a “mind-reading AI hat.”

Can neural implants and generative AI really “read minds”? Will the day come when computers can spit out accurate, real-time transcripts of our thoughts for anyone to read?

Such technology might have some benefits, particularly for advertisers seeking new sources of customer-targeting data, but it would destroy the last bastion of privacy: the privacy of our own minds. Before we panic, though, we should ask ourselves: can neural implants and generative AI really “read minds”?

The brain and the mind

As far as we know, conscious experience arises from the activity of the brain. This means that every conscious mental state must have what philosophers and cognitive scientists call a “neural correlate”: a particular pattern of nerve cells (neurons) firing in the brain.

For every conscious mental state you might find yourself in, whether you are thinking about the Roman Empire or imagining a cursor moving, there is a corresponding pattern of activity in your brain.

So if a device can track our brain states, it should simply be able to read our minds. Right?

For real-time AI-powered mind reading to be possible, we would need to detect precise, one-to-one correspondences between specific conscious mental states and brain states. And that may not be possible.

Hard to match

To read a mind from brain activity, one must know exactly which brain states correspond to which mental states. This means, for instance, distinguishing the brain states that correspond to seeing a red rose from those that correspond to smelling a red rose, touching a red rose, imagining a red rose, or thinking that red roses are your mother’s favorite.

One must also distinguish all of those brain states from the brain states corresponding to seeing, smelling, touching, imagining, or thinking about something else, such as a ripe lemon. And so on, for everything else you can perceive, imagine, or think about.

To say this is difficult would be an understatement.

Take face perception as an example. The conscious perception of a face involves all sorts of neural activity.

But much of this activity appears to relate to processes that occur before or after the conscious perception of the face: things like working memory, selective attention, self-monitoring, task planning, and reporting.

Filtering out the neural processes that are exclusively and specifically responsible for the conscious perception of a face is a Herculean task, and one that current neuroscience is far from solving.

Even if this task were solved, neuroscientists would still have found only the neural correlates of a particular type of conscious experience: namely, the general experience of a face. They would not have found the neural correlates of experiences of particular faces.

So even with amazing advances in neuroscience, a would-be mind reader would not necessarily be able to tell from a brain scan whether the person being scanned was seeing Barack Obama, their mother, or a face they didn’t recognize.

As far as mind reading goes, that wouldn’t be much to write home about.

But what about AI?

But don’t the recent headlines about neural implants and AI show that some mental states can be read, such as the intention to move a cursor and engagement in inner speech?

Not necessarily. Take the neural implants first.

Neural implants are typically designed to help a patient perform a particular task: for example, moving a cursor on a screen. To do this, they don’t need to identify precisely the neural processes that correlate with the intention to move the cursor. They just need a rough fix on the neural processes that accompany those intentions, some of which may actually underlie other, related mental acts such as task planning, memory, and so on.

Although the success of neural implants is certainly impressive, and future implants will likely deliver even more detailed information about brain activity, it doesn’t show that precise one-to-one mappings between specific mental states and specific brain states have been identified. So it doesn’t make real mind reading any more likely.

It might not be possible to perfectly map brain states to mental states.

Now consider the “decoding” of inner speech by a system combining non-invasive brain scans with generative AI, as reported in this study. The system was designed to “decode” the content of continuous narratives from brain scans while participants listened to podcasts, recited stories in their heads, or watched films. It is not very accurate, but the fact that it predicted this mental content better than chance is certainly impressive.

So let’s imagine the system could predict continuous narratives from brain scans with absolute accuracy. Like the neural implant, the system would only be optimized for that task: it would not be effective at tracking other mental activity.

How much of our mental activity could this system monitor? That depends: how much of our mental life consists of imagining, perceiving, or otherwise thinking in continuous, well-formed narratives that can be expressed in simple language?

Not much.

Our mental lives are flickering, lightning-fast, multi-layered affairs, involving perceptions, memories, expectations, and ideas all at once, in real time. It is hard to imagine how a transcript produced by even the best brain scanner, coupled with the smartest AI, could faithfully capture all of that.

The future of mind reading

In recent years, AI development has shown a tendency to overcome seemingly insurmountable hurdles. So it would be unwise to rule out the possibility of AI-powered mind reading entirely.

But given the complexity of our mental lives, and how little we know about the brain (neuroscience is, after all, still in its infancy), firm predictions about AI-powered mind reading should be treated with caution.

This article was originally published at theconversation.com