
The billionaire, the pig and the future of neuroscience

“Gertrude? Are you serious?” It was surely the most awkward three-and-a-half minutes in the history of pig coaxing, and presumably not the spectacle Elon Musk had in mind when he kicked off his latest live demo with a coy “I think it’s going to blow your mind”. Nevertheless, the sight of the CEO of Tesla and SpaceX, reportedly the world’s second richest person, enticing a camera-shy sow to the front of her pen (“Snacks are this way!”) for the benefit of a perplexed socially distanced audience certainly produced its very own kind of cognitive dissonance.

The event, held on 28th August 2020 and live-streamed to the world, was intended to demonstrate the much-hyped progress made by Neuralink, a San Francisco company founded by Musk in 2016, on its ambitious real-time brain-interface technology. And Gertrude, when she finally did amble under the studio lights, was revealed to be no ordinary porker. For two months she had been bumbling around as pigs do, seemingly oblivious to the futuristic implant lodged in her skull – a coin-sized device directly connected to her cerebral cortex via an array of threadlike tendrils attached to 1,024 electrodes implanted in her grey matter. The job of this head-mounted robot jellyfish was to monitor the neural activity across a particular region of the pig’s cortex – in this case neurons mapped to her snout – wirelessly relaying data at a higher resolution than most electrode arrays currently on the market.

Ultimately, though, as Musk explained while a display behind him registered a blizzard of electrical spikes generated by Gertrude’s snuffling, the technology will be able to “write to the brain” as well as “read” its activity. The project’s aim is to develop a two-way connection that can precisely stimulate specific neuron clusters at the brain surface, which will help “solve” health problems ranging from paralysis, blindness and hearing loss to depression and addiction. “The neurons are like wiring,” Musk declared. “And you kind of need an electronic thing to solve an electronic problem.”

As an aside, he floated several of its more crowd-pleasing commercial applications: “It’s got all the sensors you’d expect to see in a smartwatch or a phone,” he noted. One day you could submit to a patented Neuralink surgical procedure and, mooted Musk, this tech could function as a “Fitbit in your head”, monitoring your health metrics as well as offering “convenience features like playing music” – by which he apparently meant bypassing the eardrums and streaming tunes directly into your brain’s auditory centres. “It can do a lot.” Except, it seemed, remote-control your pig.

“Over time we could actually give somebody super-vision,” Musk said during a Q&A following the demo – “you could have ultra-violet, infra-red, see in radar…” – before the conversation moved on to what he calls ‘conceptual telepathy’. Words are a clunky form of disclosure, he said, imparted at “a very low data-rate… speech is so very, very slow”. If we were one day all fitted with a neural link, Musk suggested, “We could have far better communication because we could convey the actual concepts, the actual thoughts, uncompressed, to somebody else.”

“From what I can judge these are very, very good electrodes and very, very good techniques of implantation,” says Dr Christian Herff of Gertrude’s new brain apparatus. Herff leads research into the connection of brains and computers at Maastricht University in the Netherlands. Having conducted recent ground-breaking studies in the field disconcertingly known as invasive brain-computer interface (BCI), he is impressed with Neuralink’s engineering: the fine-grained data resolution their prototype promises and the futuristic neurosurgery robot they are developing to perform the implants (Musk envisages “a fully automated system” that would allow human implantees to receive their chip in a one-hour procedure without general anaesthetic). But when it comes to the bolder future-gazing claims, Herff feels Neuralink might be hamming it up a little. “They talk about streaming music to the brain and controlling everything by thought alone, but what they’re really working on is developing better electrodes.” So far, he says, “they have presented little in the way of neuroscientific insights.”

Thinking out loud

Herff, by contrast, can give you plenty of those, especially in his own area of BCI research – decoding the words we’d normally emit from our mouths by instead looking at the flurries of neural activity they spark in our brains. Applying advanced machine learning and statistical methods to the brain-recording nous of neuroscience, Herff and his colleagues’ ultimate aim is to help those whose verbal communication has been impaired or nullified by conditions such as a stroke or brain injury as well as neurological disorders like locked-in syndrome, by allowing them to dictate directly from their brains. The success they have had so far in working towards a functioning ‘brain-to-text’ interface is remarkable, and demonstrates just how powerfully the AI revolution of the past few years has impacted on neuroscience.

In 2015 Herff was lead author on a paper in Frontiers in Neuroscience detailing a system that could match the words a person was enunciating to associated constellations of neurons firing in their brains. The signal patterns had been recorded by electrocorticographic grids (ECoG) – electrodes placed on the exposed brains of patients undergoing surgery for severe epilepsy. The test subjects were asked to read texts out loud; as each utterance created its own firework display in the brain, the activity was recorded. Repeating the process created data models that could be used to train a computer to recognise specific words from neural signals alone – and automatically reproduce them as written text.
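The training loop described above – record repeated utterances, build a model per word, then map fresh neural signals back to words – can be sketched with a toy Python simulation. Everything here (the three-“electrode” signals, the noise level, the nearest-template classifier) is an invented stand-in for the real ECoG data and statistical models:

```python
import math
import random

random.seed(0)

# Invented stand-in for ECoG data: each word evokes a characteristic pattern
# of activity across three "electrodes", observed repeatedly with noise.
WORDS = {"four": [0.9, 0.1, 0.3], "score": [0.2, 0.8, 0.5], "seven": [0.4, 0.3, 0.9]}

def record(word):
    """Simulate one noisy neural recording of a spoken word."""
    return [x + random.gauss(0, 0.05) for x in WORDS[word]]

# Training: average repeated recordings of each word into a template,
# mirroring the repeated read-alouds used to build the data models.
templates = {w: [sum(col) / 5 for col in zip(*[record(w) for _ in range(5)])]
             for w in WORDS}

def decode(signal):
    """Classify a new recording as the word with the nearest template."""
    return min(templates, key=lambda w: math.dist(templates[w], signal))

# A fresh recording of "score" should decode back to its word.
print(decode(record("score")))
```

Real brain-to-text systems replace the nearest-template step with trained statistical models and a language model, but the train-then-decode structure is the same.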

And this was not the kindergarten vocabulary you might imagine; instead the study’s designers, says Herff, “went for political speeches. The best-working subject actually read the Gettysburg Address by Abraham Lincoln.” Impressively, this high-minded brain-to-text trial got it right, as Lincoln might say, three-score and 15 percent of the time – that is, once trained the interface correctly decoded 75 percent of the neural data sets into their corresponding words.

Deciphering signals relating to speech in the brain is a dizzyingly complex procedure in people who are able to talk. But moving it on from these first steps to those whom the technology is intended to help is likely to be a leap in the dark. “Bringing it to people who can’t speak any more, this is really the stage where we don’t know all that much,” says Herff. “For example, we don’t know what happens to the speech areas in people that haven’t spoken for many years. They might have been totally repurposed – even older brains are still quite flexible and might repurpose these areas.” Then again, he says, “It might also work beautifully.”

However beautifully it turns out, Herff insists that a brain-to-text interface should never be mistaken for ‘mind-reading’ in the mystical sense. From a neuroscientist’s perspective, language and speech are two very different processes – and both are distinct from introspective thought, which remains largely inscrutable. Of the three, “we’re much closer to understanding the speech production process,” says Herff, whose method hinges on the spoken word’s more mechanical aspects. “When we want to speak, at some point the muscles in our faces need to be involved. In our latest work, this is what we’ve focused on: looking at the control of facial muscles and the tongue, and then trying to decode it.”

Complementing this sound-producing activity in the motor cortex is another fertile source of data: signal bursts skimmed from the brain’s temporal lobes (the areas adjacent to the ears, responsible for auditory language comprehension), which can offer further clues from the more elevated realm of linguistic concepts. “Because when I’m actually speaking,” explains Herff, “I hear my own voice; I process the speech that has been produced.” Combine these moving snapshots of brains in mid-articulation with AI speech-recognition algorithms, which work according to much the same principles as those already in everyday use – in virtual assistants, for instance – and you have a solid basis for reconstructing what someone might be trying to say from their brain signals without hearing or seeing them say it.

“All speech-recognition algorithms usually incorporate something about language as well,” adds Herff. “So we have the neural model that says: ‘For this point in time, the most likely phoneme was “R” or “E” or something.’ Then you have a language component as well that says: ‘After “I” it’s very likely that the next word will be “am”, and very unlikely that the next word will be “are”.’ We also have a pronunciation dictionary, which tells us which words we can already recognise” – this is used to help the computer link individual utterances into distinct words. The algorithm might identify the brain patterns for “l”, “ih”, “b”, “er”, “t” and “iy”, for example; it would be the pre-loaded pronunciation dictionary’s job to combine them into the word “liberty”.
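A toy version of that last step – turning a decoded phoneme stream into words via a pronunciation dictionary – might look like this in Python. The phoneme labels follow the article’s example; the tiny dictionary and greedy matcher are simplified stand-ins for the probabilistic search a real decoder performs:

```python
# Hypothetical mini pronunciation dictionary: phoneme sequences -> words.
pron_dict = {
    ("l", "ih", "b", "er", "t", "iy"): "liberty",
    ("ay",): "I",
    ("ae", "m"): "am",
}

def phonemes_to_words(phonemes, dictionary):
    """Greedy longest-match segmentation of a phoneme stream into words."""
    words, i = [], 0
    while i < len(phonemes):
        for j in range(len(phonemes), i, -1):   # try the longest match first
            if tuple(phonemes[i:j]) in dictionary:
                words.append(dictionary[tuple(phonemes[i:j])])
                i = j
                break
        else:
            i += 1                              # skip an unrecognised phoneme
    return words

print(phonemes_to_words(["l", "ih", "b", "er", "t", "iy"], pron_dict))  # ['liberty']
```

In a real decoder the language model would also weigh which word sequences are plausible, rather than taking the first dictionary hit.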

The appliance of neuroscience

If they haven’t already, the neurolinguists might want to add “b-l-i-m-e-y” to that standard lexicon. Advancing the multi-source brain-to-text model, another group, led by neurosurgeon Edward Chang at the University of California, San Francisco (UCSF), published research in March 2020 in which average word-error rates ran as low as three percent.

In July 2020, meanwhile, a team at Stanford University led by Francis R Willett targeted a completely different region of the motor cortex to convert imagined handwriting movements into virtual typing. Relying on a much smaller set of signals than those involved in speech, Willett reported that “our study participant (whose hand was paralysed) achieved typing speeds that exceed those of any other BCI yet reported: 90 characters per minute at greater than 99 percent accuracy with a general-purpose autocorrect.”

Such impressive results were possible because communicating by hand is, neurologically speaking, a much more streamlined process than talking. About 50 percent of our motor cortex is dedicated to controlling our facial muscles, while hand movements are associated with a localised, relatively small area on the brain’s surface, the majority of whose signals can be Hoovered up by a single electrode grid.

Progress in the field has been made at such a blistering pace that Herff is confident a system for capturing imagined speech in healthy participants could be up and running within the next five years. Patients with severe speech impairment might have to wait a good deal longer, though. “Brain physiology is really different between people,” Herff explains while holding up a facsimile of his own expert grey matter produced by the 3D printer standing behind him. (He 3D prints scans of his students’ brains whenever he gets the chance, to illustrate just how widely the physical structures vary from head to head.)

Indeed, his group has never yet managed to apply data from one patient to another, and the invasive nature of making intracranial recordings via ECoG means trials have always been restricted to those with a pre-existing neurological condition, which, he admits, might itself affect the findings. “All of these studies have ten patients or less,” says Herff.

Finding a way to create generalised neural models with reliable, predictable results from patient to patient – and that will work as effectively for those whose speech function is impaired – will need, quite literally, a lot more brains. “Maybe when we have 1,000 patients, then we might get to a stage where we can transfer the data from one to the other.”

One area in which hands-on applications have begun to take shape in recent years, however, is neuroprosthetics, another field of bioengineering in which motor signals are decoded and reconstructed by computer. Once modelled from neural data, patients’ intended motions are then mimicked in artificial joints, hands and limbs. With this kind of approach in 2018, researchers in Lausanne were able to stimulate muscle control in the legs of three paralysed patients using electrode arrays placed on their spinal cords that replicated their brain signals for walking.

Moving from the practical to the more philosophical and speculative edges of neuroscientific research, the results can get surreal and often faintly disturbing. Some invasive studies, for instance, have triggered out-of-body experiences – “because you’ve stimulated that part of the brain that might be to do with the representation of where you are in your own body,” says Dr Paul Taylor, a cognitive neuroscientist based in Munich who works on brain-activity recording and brain stimulation using non-invasive techniques. Alarmingly, stimulating other areas can seemingly trigger people’s sense of free will. “People start reporting that they have an urge to move, and then if you turn the dial up they actually make a movement,” says Taylor.

One technique applied to brain scans, called ‘multivariate pattern analysis’, has even been used to decode people’s dreams. In 2013 researchers in Tokyo let their subjects fall asleep inside fMRI machines (fMRI is a non-invasive technique that produces low-resolution images of brain activity by tracking blood flow, as firing neurons draw oxygen from the blood). They then asked them what they were dreaming about. By repeating this process over and over, they trained their computer to associate patterns of neural activity with specific categories of visual imagery during sleep – the researchers were then able to predict their participants’ dreams from a list of 200 dream reports with 60 percent accuracy.
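The logic of multivariate pattern analysis can be illustrated with a toy simulation (the categories, “voxel” patterns and noise values are all invented; real MVPA works on thousands of voxels with proper cross-validation): build a template pattern per category from labelled scans, label new scans by best correlation, and measure accuracy against chance:

```python
import random

random.seed(1)

# Invented stand-in for fMRI data: each dream-image category evokes a
# characteristic activation pattern over four "voxels", measured with noise.
CATEGORIES = {"face": [1, 0, 0, 1], "house": [0, 1, 1, 0], "car": [1, 1, 0, 0]}

def scan(category, noise=0.6):
    """Simulate one noisy fMRI pattern recorded while dreaming of a category."""
    return [v + random.gauss(0, noise) for v in CATEGORIES[category]]

def correlate(a, b):
    """Pearson correlation between two activation patterns."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a) ** 0.5
    vb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (va * vb)

# Training: average many labelled scans into one template per category.
templates = {c: [sum(col) / 20 for col in zip(*[scan(c) for _ in range(20)])]
             for c in CATEGORIES}

def decode(pattern):
    """Label a new scan with the best-correlating category template."""
    return max(templates, key=lambda c: correlate(templates[c], pattern))

# Measure decoding accuracy over repeated trials: well above the one-in-three
# chance level, though not perfect -- the shape of the Tokyo result.
trials = [(c, decode(scan(c))) for c in CATEGORIES for _ in range(50)]
accuracy = sum(true == pred for true, pred in trials) / len(trials)
print(f"decoding accuracy: {accuracy:.0%}")
```

The Tokyo study used far richer imagery categories and rigorous held-out testing, but the pattern-matching principle is the same.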

Miguel Nicolelis, meanwhile, a Brazilian pioneer of BCI, envisages a future of brain-to-brain interaction in which we can access each other’s neural activity directly, to communicate and to enhance our cognitive abilities. In 2015 he published a paper in which he reported having linked the brains of four rats via implanted electrodes in a rudimentary “organic computer”, which he called ‘Brainet’. The linked animals were able to send and receive signals to each other in the areas of their cortex that process tactile sensations, and over a series of training sessions the rats learned to synchronise their neural activity when performing tasks. When pooling their cognitive resources, wrote Nicolelis, the rats “consistently performed at the same or higher levels than single rats in these tasks.”

Harmonised rodents are a long way from a Vulcan mind-meld, of course. But given the pace of change in neuroscience to date, in a few decades’ time is it really all that far-fetched to picture lives organised not around an Internet of Things but an Internet of Thinks?

The only way is ethics

“The whole enterprise of recording the brain in order to decode the contents and nature of the mind is something that we should be being really careful with, and I don’t think we are,” says Dr Pim Haselager, a philosopher specialising in tech ethics. “We’re kids enjoying the possibilities.”

Haselager, who teaches on the implications of artificial intelligence, robotics and cognitive neuroscience at Radboud University in Nijmegen in the Netherlands, is keen to stress some of the potential benefits of BCI as its clinical applications develop. “One of the things that I like about this [brain-to-text] technology,” he says, “is that it’s about me consciously producing the sentences that I want to produce.” Even if communication-by-thought were vulnerable to being intercepted and decoded in the same way online messages are today, we would still be in complete control of what was being said. In its current form, for instance, neuroscience that interprets speech intention wouldn’t be able to discern if you were lying.

Even so, Haselager also believes that surveying the wider world of brain-monitoring science pitches up serious ethical quandaries. With powerful enough machine-learning tools, he says, researchers could feasibly advance a number of fields into concerning new territory. You can conduct research on memory, he suggests; “you can do emotion, behavioural dispositions, character traits, capacities like aggression or self-control, sexual preference.” Taken all together, including the seemingly more benign lines of clinical research, for him it raises a big question: “How far do we want to enter people’s minds?”

If it’s possible introspection could one day be open to inspection, Haselager believes public discussion of BCI should be a priority right now. “You don’t want to wait until the technology is out there being used before debating ethical concerns,” he warns, “because that means you’re too late. And we’ve seen that in relation to the internet and privacy.”

Throwing these worries into sharp relief is the fact that much of the impressive progress made by Edward Chang’s team at UCSF has been funded by Facebook. In a February 2020 talk at a Silicon Valley wearable tech conference, research director of Facebook Reality Labs Mark Chevillet updated the audience on his company’s BCI programme and lauded the results seen to date by its collaborators at UCSF. But he also stated: “We’re not interested in anything involving implants – so we have some hard work to do on our side to figure out how we’re going to do the same thing using a completely non-invasive interface.” In order to make BCI applications “relevant to consumer electronics”, he explained, Facebook is pursuing techniques of infrared scanning from outside the skull. In his presentation, Chevillet couched Facebook’s BCI project as “an ambitious, long-term, probably ten-year research programme” to envision “the next-generation personal computing platform after the smartphone”.

For some working in the field, the intensity of interest from big tech firms is a source of discomfort. “This is certainly not the type of access you would want them to have,” says Herff. “On the other hand, the results with non-invasive technology on this are really limited right now… and measurements that would be in any way critical are very, very limited.” While this might be reassuring in the short-term, “By the time people like Elon Musk and Facebook start pouring in the millions, it tells me something,” says Haselager, weighing the possibility of wearable tech that can sift for commercially meaningful neural data. “They’re in there for some sort of profit, and something tells them that this is going to be achievable. It might be a long shot but it’s on the cards… I wouldn’t be surprised if it really happens in the next couple of decades, that it becomes practically usable.”

And what then? Haselager thinks we should be wary of giving analysts and advertisers any hard physiological hints about our behaviour and beliefs. “If you give away brain data, you don’t know what can be mined from it in another decade or so,” he warns, offering a hypothetical example: “Maybe in that data there’s already an emotional shading that we currently can’t access” – that is, some nuanced aspect of the recorded brain activity that reveals more than we realise about our inner motivations – “but we might discover that, by a better algorithm ten years from now, we can actually access it.”

Taylor, by contrast, believes these are depths that are unlikely to be fathomed, even taking the long view. Those parts of our inner world that might need protecting, he says, are “at a semantically rich, often symbolic level – where you’re thinking about your phone number, your secret sexual urges, your private demons or your credit card number. But what we actually get from this level of brain analysis are things like intentions of movements. Or possibly which scene you’re imagining if you’ve been given a shortlist,” as in the Tokyo dream experiments. “To be able to decode something as sophisticated and rich as human experience feels like it’s not just a quantitative jump but a qualitative one.”

“The more you realise how difficult it is to decode this stuff,” adds Taylor, “the less you worry about anyone being able to succeed in getting something as interesting as your actual opinions. That just seems to be inconceivable.” Although he adds a caveat: “There have been several developments in the last 20 years where I’d thought, ‘No that’s not going to happen,’ and they have happened.”

As for Musk’s claims about the brave new frontier in Pig Data, Taylor shares the view of many scientists working in the field, Herff included, that they could indeed represent a step-change on the technical side, even if they haven’t yet advanced the neuroscientific community’s understanding of how brains – pig, human or otherwise – work. Taylor is ultimately optimistic, though, about Neuralink’s high-profile project and the brain-power and funding it’s likely to attract to the sector.

“Think about all of the ancillary innovations that came out of the space race, just because you had that many people working on rockets. The equivalent is going to happen in neuroscience. If you’ve got Musk’s world going crazy on this, then in a couple of years there’s going to be all sorts of other products.” It might be a power source that’s been miniaturised and intended for implants that revolutionises the rest of everyday electronics, he suggests, or the “robot sewing machine” Neuralink is working on for surgical implantation that has the greatest immediate impact in the medical sphere.

“Personally I very much doubt this is going to lead to the iPod in your head or the Fitbit in the brain,” says Taylor. “But I think it’s going to be really wonderful just for getting stuff done.” Either way, Musk’s millions are likely to accelerate a field of enquiry that’s already moving at breathtaking speed – and there’s good reason to get excited about the applications that might emerge in the next few years. Not to do so, in fact, would seem just a little pig-headed.

We hope you enjoyed this sample feature from issue #40 of Delayed Gratification
