This story is part of CES, where CNET covers the latest news on the most incredible tech coming soon.
In my Oculus Quest VR headset, I was in a room surrounded by large-brained aliens. Their heads flashed, white and black. I turned to one, staring at it. Soon enough, its head exploded. I looked at the others, making their heads explode too. Then I looked at a flashing portal marker across the room and was gone. I did all this without eye tracking: a band on the back of my head was reading my visual cortex with electrodes.
I felt like I was living some real-life virtual version of the David Cronenberg film Scanners. But in reality, I was trying a neural input device made by NextMind.
Before holiday break, I received a large black box with a small package inside: a black disc with a headband, the disc covered in small rubber-footed pads. NextMind's $399 developer kit, announced a year ago at CES 2020, chases something many companies are striving for: neural input. NextMind aims to read the brain's signals to track attention, control objects and maybe even more.
It's hard to grasp the real potential of neural input technology, in part because the startups in this space are doing very different things. CTRL-Labs, a neurotechnology company acquired by Facebook in 2019, developed an armband that could translate hand and finger movements into inputs. Another company, Mudra, is making a wristband for the Apple Watch, due later this year, that also senses neural signals at the wrist.
I wore an early version of the Mudra Band a year ago, and experienced how it could interpret my fingers' movements, and even roughly measure how much pressure I was applying when I squeezed my fingers. Even more weirdly, Mudra's tech can work when you aren't moving your fingers at all. The applications could include assistive uses, such as serving as a control interface for people who don't have hands.
NextMind’s ambitions look to follow a similar assistive-tech path, while also aiming for a world where neural devices could possibly help improve accuracy with physical inputs — or combine with a world of other peripherals. Facebook’s AR/VR head, Andrew Bosworth, sees neural input tech emerging at Facebook within three to five years, where it could end up being combined with wearable devices like smart glasses.
My NextMind experience has been rough, but also mesmerizing. The dev kit has its own tutorial and included demos that run on Windows or Mac, plus a Steam VR demo that I played on the Oculus Quest over a USB-C cable. The compact Bluetooth plastic puck comes on a headband, but it can also unclip and attach directly onto the back of a VR headset strap with a little effort.
All of NextMind's experiences involve looking at large, subtly flashing areas of your screen, which can be "clicked" by focusing. Or staring. It was hard to tell how to make something activate, and I found myself trying to open my eyes more, or breathe, or concentrate. Sooner or later, the thing I was looking at would click. Out of a field of five or so on-screen flashing "buttons," this really did know which one I was looking at. And again, there was no eye tracking involved at all; the device just rested on the back of my head.
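NextMind hasn't published its decoding algorithm, but the flashing buttons resemble a classic brain-computer-interface trick: each target flickers at its own rate, and the visual cortex echoes the flicker frequency of whatever you're attending to. As a purely illustrative sketch (the sampling rate, flicker frequencies and decoder here are my assumptions, not NextMind's), a decoder along those lines could look like this:

```python
# Illustrative sketch only, NOT NextMind's actual algorithm: a simple
# frequency-tagging decoder. Each on-screen "button" flickers at a
# distinct rate; we pick the button whose flicker frequency carries
# the most power in the recorded brain signal.
import numpy as np

FS = 250                                  # assumed sampling rate, Hz
BUTTON_FREQS = [8.0, 10.0, 12.0, 15.0]    # assumed flicker rate per button

def decode_attended_button(signal, fs=FS, freqs=BUTTON_FREQS):
    """Return the index of the button with the most spectral power."""
    spectrum = np.abs(np.fft.rfft(signal))
    bins = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    powers = []
    for f in freqs:
        # Sum power in a narrow band around each flicker frequency.
        band = (bins > f - 0.5) & (bins < f + 0.5)
        powers.append(spectrum[band].sum())
    return int(np.argmax(powers))

# Fake one second of "brain signal": noise plus a strong response at
# button 2's flicker rate, as if the user were staring at that button.
t = np.arange(FS) / FS
rng = np.random.default_rng(0)
eeg = rng.normal(0, 0.5, FS) + np.sin(2 * np.pi * BUTTON_FREQS[2] * t)
print(decode_attended_button(eeg))  # → 2
```

Real signals are far noisier than this toy example, which is presumably why the demos made me stare and concentrate before anything clicked.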
Did it make me feel uncomfortable? Uncertain? Oh, yes. And as my kid came in and saw me doing this, and I showed him what I was doing, he was as astonished as if I had performed a magic trick.
NextMind's dev kit isn't meant for consumer devices yet. The Mudra Band, while launching as an Apple Watch accessory via crowdfunding site Indiegogo, is also experimental. I have no doubt we'll see more technology like this. At this year's virtual CES, there was even a "neural mouse" glove that aimed to improve reaction times by sensing the intent to click a hair faster than a physical mouse button could register it. I didn't try that glove, but the idea doesn't sound far off from what companies like NextMind are imagining, either.
Right now, neural inputs feel like an imperfect attempt at creating an input, like algorithms searching for a way to do something I'd probably just do with a keyboard, a mouse or a touchscreen instead. But that's how voice recognition once felt, too. And hand tracking. Right now, NextMind's demos really do work. I'm just trying to imagine what happens next. Whatever it is, I hope more exploding heads won't be a part of it.