José del R. Millán is a leading global innovator in the field of brain–machine interfaces (BMI), particularly those based on electroencephalogram (EEG) signals. A professor at the University of Texas at Austin and a recipient of several prestigious awards, Millán has focused his work on the design of brain-controlled robots.
In addition to advancing the fundamentals of BMI and developing neuroprosthetics, he is prioritising the translation of these technologies for people living with motor and cognitive disabilities. At the same time, he is exploring how BMI systems can offer new interaction modalities for able-bodied individuals, enhancing human capabilities.
In an exclusive interaction with THE WEEK, Millán spoke about the future of brain–machine interfaces, and critical issues such as protecting neural data and the need for robust frameworks to safeguard it. Edited excerpts:
Q: You pioneered non-invasive EEG-based BCI technology. My understanding is that you developed a shared-control system to make interactions between humans and robots more intuitive. How did this idea first occur to you in the late 1980s or early 1990s?
A: As with most things in life, if you pay attention to what’s happening around you, ideas come together naturally and seem obvious in hindsight. I’m talking about more than 30 years ago. At the time, I was researching autonomous robots—robots that could learn by themselves.
Even then, I felt that while these robots were impressive, something was missing: a way for humans and robots to truly collaborate. I began asking myself what real-world problems such collaboration could solve.
Around that time, I encountered someone who had been in an accident and was completely paralysed. That experience made me think: this is exactly where robots could make a difference—helping people perform tasks they can no longer do themselves.
Q: Was this someone you knew personally?
A: Not really. It was a well-known journalist in Italy, where I was living at the time. He had suffered a serious accident—though I don’t recall the details—and it struck me deeply. I began to wonder: how could a robot help such a person regain independence?
That led to another question: how could someone who is completely paralysed communicate their intentions to a robot? That’s when I remembered reading about EEG and how it could record brain signals. I thought—this is the future.
The final piece came during a sabbatical at Stanford. On my last day, I visited a lab hosting an open house. I saw a very primitive portable EEG system—not wearable, but still portable—and everything clicked. That’s when I decided to pursue this line of research.
Q: Looking ahead, how do you see this technology evolving over the next few decades—especially in terms of redefining mobility and making wheelchairs obsolete, while fostering social inclusion?
A: It’s about much more than wheelchairs. I envision brain-controlled robots more broadly, including exoskeletons that can restore movement.
For example, if someone’s hand is paralysed but the muscles are still healthy, an exoskeleton could restore movement through a brain–computer interface. The real challenge is achieving precise control with non-invasive EEG.
This is where shared control comes in. A BCI can only decode a limited set of commands, but when combined with intelligent robotics, the system can interpret those commands in context.
If I want to pick up a cup, the robot can infer how to grasp it based on the environment. If it’s a phone, it will adjust its grip accordingly—perhaps from the side that allows me to see the screen. The robot’s sensors and AI enable this contextual understanding, much like a baby learning motor skills.
Users will also require some training—not years, but enough for the AI “avatar” to learn their intentions and adapt. Eventually, the brain will transmit only abstract intentions, and the AI will execute them efficiently, correcting errors using what we call error-related potentials.
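In rough terms, the shared-control loop Millán describes might look like the following minimal Python sketch. Everything here is a hypothetical placeholder rather than his lab's actual software: the BCI contributes only a coarse command, the robot's own perception decides how to carry it out, and a detected error-related potential vetoes the action.

```python
import random

# Hypothetical shared-control loop: the BCI decodes only a coarse,
# low-bandwidth intention; the robot's perception and planner supply
# the fine-grained details of how to execute it.

COMMANDS = ["reach", "grasp", "release", "rest"]

def decode_intention(eeg_window):
    """Placeholder EEG decoder returning one of a few coarse commands.
    A real decoder would classify band-power or evoked-potential features."""
    return random.choice(COMMANDS)

def plan_action(command, scene):
    """Robot-side intelligence: interpret the coarse command in context,
    e.g. 'grasp' a phone -> a side grip that keeps the screen visible."""
    if command == "grasp":
        grip = {"cup": "handle grip", "phone": "side grip"}.get(scene["object"], "power grip")
        return f"grasp {scene['object']} with {grip}"
    return command

def detect_errp(eeg_window):
    """Placeholder error-related-potential detector: fires when the
    user's brain signals that the robot chose the wrong action."""
    return random.random() < 0.1  # assumed 10% error rate for the demo

def shared_control_step(eeg_window, scene):
    command = decode_intention(eeg_window)
    action = plan_action(command, scene)
    if detect_errp(eeg_window):
        return "abort and retry"  # the ErrP vetoes the planned action
    return action

print(shared_control_step(eeg_window=None, scene={"object": "phone"}))
```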
Q: Since brain signals are faint and complex, how do you detect and amplify them accurately using non-invasive methods like EEG?
A: Amplification isn’t just about making signals bigger—it’s about making them richer and more informative.
We combine neurotechnology, neuro-engineering, and principles of human and machine learning to achieve this. One approach involves brain stimulation, which strengthens specific motor cortex regions identified by the BCI as critical for certain commands.
At the same time, machine learning models stabilise these signals by constraining variability, making them more consistent over time.
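One generic way a machine-learning pipeline can constrain EEG variability, sketched below in Python, is adaptive normalisation: each feature is continually re-referenced to a running estimate of its mean and variance, so slow drifts across a session are absorbed before the decoder sees the data. This is a textbook technique, not necessarily the specific method Millán's group uses.

```python
import numpy as np

class AdaptiveNormalizer:
    """Running z-score normalisation of EEG features.

    Keeps exponential moving estimates of each feature's mean and
    variance, so slow non-stationarities (electrode drift, fatigue)
    are absorbed before the features reach the classifier.
    """

    def __init__(self, n_features, alpha=0.01):
        self.alpha = alpha               # adaptation rate
        self.mean = np.zeros(n_features)
        self.var = np.ones(n_features)

    def update(self, x):
        # Update running statistics, then standardise the sample.
        self.mean = (1 - self.alpha) * self.mean + self.alpha * x
        self.var = (1 - self.alpha) * self.var + self.alpha * (x - self.mean) ** 2
        return (x - self.mean) / np.sqrt(self.var + 1e-8)

# Simulated band-power features that slowly drift over a session.
rng = np.random.default_rng(0)
norm = AdaptiveNormalizer(n_features=4)
for t in range(1000):
    drift = 0.005 * t                    # slow session drift
    sample = rng.normal(loc=drift, scale=1.0, size=4)
    stable = norm.update(sample)         # drift largely removed
print(np.round(stable, 2))
```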
Once a person learns to control a single movement, the next challenge is scaling that ability to more complex tasks. For instance, everyday activities are often bimanual. If someone uses a brain-controlled exoskeleton on one hand while moving their healthy hand, the signals can interfere with each other.
So, we’re working on filtering and isolating signals to ensure one hand’s activity doesn’t disrupt control of the other. That’s key to making these systems robust and usable in daily life.
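A standard example of such signal isolation is spatial filtering with Common Spatial Patterns (CSP), which finds channel weightings that emphasise one condition's activity while suppressing the other's. The NumPy/SciPy sketch below is illustrative only and is not claimed to be the filtering approach used in Millán's lab.

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_a, trials_b, n_filters=2):
    """Common Spatial Patterns: spatial filters that maximise the
    variance of condition A while minimising that of condition B.

    trials_*: arrays of shape (n_trials, n_channels, n_samples).
    Returns an (n_channels, 2 * n_filters) filter matrix.
    """
    def mean_cov(trials):
        return np.mean([np.cov(t) for t in trials], axis=0)

    ca, cb = mean_cov(trials_a), mean_cov(trials_b)
    # Generalised eigenvalue problem: Ca w = lambda (Ca + Cb) w.
    vals, vecs = eigh(ca, ca + cb)
    order = np.argsort(vals)
    # Take filters from both ends of the spectrum: most-A and most-B.
    picks = np.concatenate([order[:n_filters], order[-n_filters:]])
    return vecs[:, picks]

# Toy data: 16-channel EEG, two conditions (e.g. left vs right hand).
rng = np.random.default_rng(1)
a = rng.normal(size=(30, 16, 256))
b = rng.normal(size=(30, 16, 256))
W = csp_filters(a, b)
filtered = W.T @ a[0]    # project one trial onto the CSP filters
print(filtered.shape)    # (4, 256)
```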
Q: How could BCIs help extend human healthspan—and potentially lifespan?
A: There are two main aspects.
First, mobility. As people age, movement declines. Soft exoskeletons—integrated into clothing—could assist movement without requiring brain control. Sensors embedded in fabrics can detect motion and provide support by amplifying or stabilising movements.
The second, and more important, aspect is cognitive health. Cognitive decline—especially memory loss—can significantly reduce quality of life and lead to social withdrawal.
Here, BCIs can help. The same principles used to decode motor intentions can be applied to cognitive functions. These “cognitive BCIs” can detect neural correlates of deficits and compensate for or retrain brain functions.
They can promote neuroplasticity—the brain’s ability to reorganise—helping re-encode lost functions in alternative pathways. Over time, these systems don’t just assist; they help restore cognitive abilities such as memory, attention, and executive function.
Q: Would AI play a role here—perhaps in decision-making or reducing stress?
A: Absolutely. Sensors can monitor a person’s environment, and AI can interpret this data to provide real-time cognitive support.
For instance, smart glasses could project subtle cues—names, past interactions, or relevant information—during conversations. An AI avatar could retrieve forgotten details or suggest useful information.
This reduces cognitive load and stress, allowing people to function more naturally and confidently as they age.
Q: When do you think BCIs will become widely accessible beyond research settings?
A: The turning point will likely come when health insurance systems begin covering BCIs for people with disabilities. That would create a large market, encouraging companies to develop more solutions.
As adoption increases, costs will fall, making the technology accessible to the general population. We may also see an app ecosystem for BCIs, similar to what exists for smartphones.
People without disabilities will begin using BCIs not to become “superhuman”, but to interact with technology more efficiently and naturally—with the brain remaining in control.
Q: BCIs generate vast amounts of neural data. Why is it important to develop frameworks to protect this information?
A: This is still an open question. We don’t yet fully understand how AI systems might exploit neural data or how far that could go.
What we do know is that such data must be anonymised so it cannot be traced back to individuals.
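At its most basic, that means stripping identifying metadata and replacing subject identifiers with unlinkable pseudonyms before recordings are shared. The Python sketch below illustrates only this minimal step; as Millán notes, genuinely anonymising neural data is a much harder, still-open problem, since the signals themselves can be identifying.

```python
import hashlib
import secrets

# Simplified sketch of pseudonymising an EEG recording's metadata.
# Real neural-data anonymisation is harder: the signals themselves
# can carry identifying information, not just the metadata.

SALT = secrets.token_hex(16)  # kept secret, stored separately from the data

IDENTIFYING_FIELDS = {"name", "date_of_birth", "address", "email"}

def pseudonymise(record):
    clean = {k: v for k, v in record.items() if k not in IDENTIFYING_FIELDS}
    # Replace the subject ID with a salted, one-way hash.
    clean["subject_id"] = hashlib.sha256(
        (SALT + record["subject_id"]).encode()
    ).hexdigest()[:12]
    return clean

record = {
    "subject_id": "S042",
    "name": "Jane Doe",
    "date_of_birth": "1980-01-01",
    "eeg_file": "session1.edf",
    "sampling_rate_hz": 512,
}
print(pseudonymise(record))
```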
At the same time, we must recognise that behaviour is a consequence of brain activity. In theory, if brain activity could be manipulated, behaviour could also be influenced.
That’s why strong ethical and regulatory safeguards are absolutely critical as this field advances.