Link Essay on Core77
Essay written for Core77, as part of a year-long series on the future of Interaction Design.
Making a case for the designer's role in shaping preferable futures for nascent technologies, using the example of brain-computer interfaces.
When Brain-Computer Interfaces Go Mainstream, Will Dystopian Sci-Fi Be Our Only Guidance?
A look at how we can redirect negative feelings towards BCI to shape a preferable future
"I enter the subway. It's crowded as usual around this time, but I manage to find a vacant seat next to a Talker—a man carrying on a phone conversation in public space. Only grandpas do that these days. I take out my ThoughtReader. I just got a new one last month, much more discreet than my old one; it fits right behind my ear. I hold up my smartwatch and open the ThoughtNotes app. I press the tiny switch behind my ear and feel a little tingle, a sign that it's connected. A small light blinks through my ear to indicate to others that I am focused. I start jotting down some ideas. It can be a little messy sometimes, especially when you are just forming the thoughts, but it's fast and I can easily clean it up on my computer at home. When I'm done, I press save and open the QuietChat app. I call my husband and we thought-chat a bit about work and dinner. I can sense he's tired. It's funny, I think to myself afterwards: I can't believe we used to have these conversations out loud…"
This scenario may sound like your average sci-fi story, but there is an important difference—this scenario is grounded in real research and current technological development. More importantly, this scenario is an initial sketch for a future vision that I wouldn't mind inhabiting.
When I set out to write an article about the near future of brain-computer interfaces (BCIs), I was met with a lot of shivers, 'hm, good luck's and 'oh, scary!'s. The public image of BCI is heavily shaped by dystopian scenarios, as depicted in movies and series like Black Mirror. Whenever a new technological breakthrough in this field is presented, you can bet that all the doom scenarios are listed in the endless comment threads below. I understand the strong reactions to such an intimate piece of technology, but what about its promises?
Most BCIs were initially developed for medical applications. Some 220,000 hearing-impaired people already benefit from cochlear implants, which translate audio signals into electrical pulses sent directly to the auditory nerve. Recently the industry was joined by Elon Musk, who announced a $27 million investment in Neuralink, a venture with the mission to develop a BCI that improves human communication in light of AI. And Regina Dugan presented Facebook's plans for a game-changing BCI technology that would allow for more efficient digital communication.
Whether you're ready for it or not, these are all signals that brain-computer interfaces won't just stay in the realm of neuroprostheses and entertainment, but could actually go mainstream. If we accept for a moment that people will continue to work on this technology and its capabilities will continue to improve, and if we assume that no-one is interested in living the doom scenario, then we can try to consider the real implications and possibilities of this technology and imagine a viable alternative. What does it mean for interactions with our devices, and more importantly, with each other? Could this be the ultimate interface—one that is invisible, seamlessly integrated into our minds?
First of all, what are these so-called brain-computer interfaces currently out there actually capable of? The answer depends on who you ask and whether or not you are willing to undergo surgery. For the purpose of this thought experiment, let's assume that healthy people will only use non-invasive BCIs, which don't require surgery. In that case, there are currently two main technologies: fMRI and EEG. The first requires a massive machine, but the second, with consumer headsets like Emotiv and NeuroSky, has actually become available to a more general audience.
Emotiv’s EEG headset
Rodrigo Hübner Mendes was the first to drive a Formula 1 car down a racetrack with just the power of his mind, using Emotiv's EEG headset. It was an impressive accomplishment, but "…it did take him several months of training to get reliable control," says Erica Warp, VP of product at Emotiv. In the short term, a more scalable direction, she believes, is that of the so-called passive interface. Giving computers awareness of our cognitive state allows them to respond accordingly. Think of parameters such as focus, engagement, interest and stress. Assuming we figure out the privacy issues, I am excited by the idea of contextually aware digital companions that respond more in sync with our state of mind. Like a good co-worker, they would not disturb me with notifications if they sense I'm deeply focused. Like a good friend, they could communicate in a more soothing, relaxed tone if they notice I'm tired or stressed. And like a good teacher, they could adjust an educational approach dynamically according to my level of engagement. Companies like QNeuro are already actively exploring this direction.
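To make the passive-interface idea concrete, here is a minimal sketch of how such cognitive-state gating might work, assuming a headset that reports focus and stress as values between 0 and 1. All names here are hypothetical illustrations, not part of any real Emotiv or QNeuro API:

```python
# A toy sketch of "passive BCI" notification gating: hypothetical
# cognitive-state readings decide how, or whether, a message reaches you.
from dataclasses import dataclass

@dataclass
class CognitiveState:
    focus: float   # 0 = idle, 1 = deeply focused (assumed scale)
    stress: float  # 0 = calm, 1 = highly stressed (assumed scale)

def deliver(message: str, state: CognitiveState) -> str:
    """Decide how a companion device presents a notification."""
    if state.focus > 0.8:
        return "hold"                  # like a good co-worker: don't interrupt
    if state.stress > 0.7:
        return f"(gently) {message}"   # like a good friend: soften the tone
    return message                     # otherwise deliver as-is

print(deliver("Meeting at 3pm", CognitiveState(focus=0.9, stress=0.2)))  # prints "hold"
```

The thresholds are placeholders; in practice they would presumably be calibrated per user during a training period, much like the personal-keyboard training described later in this essay.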
Although valid in specific situations, I don't feel that use cases like these would make us walk around with a headset all day. To imagine that scenario, we have to travel further into the future. Back to Facebook's presentation. If Facebook can pull off what it presented, a non-invasive BCI that reads the speech centre of our brain at 100 words per minute, five times faster than typing on a smartphone, things get a little more uncomfortable. Most of that discomfort comes from the sense that this might actually be the form in which BCI gets adopted more broadly.
These further-out scenarios are impossible to predict, which makes them all the more important to imagine. This is where the average dystopian sci-fi image lives, and we will want to take the valid concerns exposed in those images and offer a more humane, preferable alternative.
For example, what are the terms and conditions? I know, rationally, that typing something on my phone might not be that different from thought-writing something to my phone, but the privacy bells start ringing in my head. What does it mean to wear a device that can read my brain? In my ideal scenario, the device is entirely mine. I pay for the device, rather than have my data pay for the device. I go through a period of training with it: it learns how my brain communicates certain words, and it learns the difference between angry excitement and happy excitement. After a week we are good to go. It is like a keyboard, a very personal keyboard, my keyboard, one that I can plug into any computer I find, because the computer knows how to interpret my brain-keyboard's output. The keyboard only connects locally, like Bluetooth or whatever its future equivalent may be, and only when I use it.
Initially, the social impact may not even be that big. Say I'm having coffee with a friend and I want to send a message to my colleague. I would need to activate my thought-reader and pull out a screen of some sort (a phone, watch or AR headset), and then, while doing most of the writing by thought, I would still watch the screen closely to avoid typos. This whole ritual would most likely be considered just as rude as using my smartphone in the same scenario now. So while we think of BCIs as being highly invisible, I expect that their initial usage would still be reasonably transparent and visible to those around us.
The big social disruption likely lies even further out, but it will be directly influenced by the way the first mainstream BCIs are designed. Imagine a future in which we can not only read signals from the brain but also write signals back to it. In this future, imagine Augmented Reality is pervasive—digital information can be overlaid on your visual perception at will. We can have very private conversations in public space. It might be hard to tell the difference between someone daydreaming and someone thought-writing. However, like the little light behind the ear of the protagonist in my scenario, designers might end up creating purposeful signs that show when someone is using a BCI, and so avoid rude or otherwise uncomfortable social situations. Those are exactly the kind of mundane details that could define the difference between a dystopian and a utopian future.
Although it is hard to tell where exactly these technologies will take us, people right now are working hard to make them a reality. Their becoming mainstream is more a question of when than if, and when they do, my biggest concern will be how. Currently the biggest push comes from the medical, neuroscience and technology industries, and only a few designers have shared visions for the possibilities of BCIs outside of assistive or diagnostic medical tech. In their unique position to represent the end user and consider downstream social implications, designers could add meaningfully to the creation of a positive future vision. I believe it's important that designers help shape the future of our brain-computer interactions sooner rather than later, and guide the way past dystopian visions to the promise of these technologies without their negative consequences.
zazazuilhof [at] gmail.com