That's really cool, and a welcome bit of good news.
Some smart cookies have implemented a brain-computer interface that can synthesize speech from thought in near real-time. Described in a paper published in Nature Neuroscience this week, the neuroprosthesis is intended to allow patients with severe paralysis and anarthria – loss of speech – to communicate by turning brain …
Keep this in mind.
Neural tech will be like the Internet: it will work both ways, and you will never be able to fully disconnect or turn it off. Megacorps will charge you for a dopamine fix on a subscription model.
In the future, users of this technology will be called:
- Plugged-in Peasants
- Cognitive-Cattle
- Mind-Muppets
- Synapse-Slaves
Or my favorite term: F****** idiots.
"This mid-coitus ad break is brought to you by..."
I'm not discounting the potential for a William Gibson-esque hellscape future (God knows I'm not going to get any sort of brain implant), but as it stands this is a medical device for people who may have no other way of easily communicating or navigating the physical world. There are real drawbacks to medical technology. We should be having discussions about the lack of security in pacemakers. We should be talking about hearing aid companies that each release their own proprietary software, barely held together by the NOAH standard, and EOL their products after a couple of years just to release the exact same thing with a differently shaped interface connector. And we should be talking about the long-term health and security implications of this device. But I'd hardly call a paralysed individual desperately trying to recapture human connection a schmuck.
Also, frankly, I don't foresee a large adoption rate should brain interfaces progress to the point of consumer availability. How many people bought the Apple VR headset, which doesn't even require a hospital stay and having your head cut open?
You're taking a very useful technology and rolling it all the way down to the most harmful possible point. Okay, there is potential for abuse, but this is like seeing the first car and saying, "This will be used to drive over people, and users will be called mass murderers."
Yeah, impressive stuff! It's also worth following TFA's link under "a statement" and, from there, the second link under "For more information": "How artificial intelligence gave a paralyzed woman her voice back". It gives background and contrasts this improved version with the previous methods of communication.
The main thing here (I think) is the near-immediacy, which should help with auditory feedback and thus enhance speech production (e.g. compared to the speech of a deaf person, or speech with an 8-second delay). IIUC, that helps with "volitionally controlled modality", an "increased sense of embodiment", and "fluent speech synthesis".
That'd be a valuable application of AI/RNNs right there, imho (compared to some other, rather dubious, 'uses').
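Since RNNs came up: for anyone curious what "near real-time" implies mechanically, below is a minimal sketch of a streaming decode loop in Python. To be clear, this is not the authors' model; the channel count, frame interval, phoneme inventory and network size are all made-up placeholders, and a real system would feed the output into a vocoder rather than printing phoneme IDs. The point is just that carrying the RNN's hidden state across frames lets you emit output one frame at a time, so latency is bounded by the frame length rather than by the whole utterance.

```python
# Minimal sketch of streaming neural-feature-to-phoneme decoding with an RNN.
# All sizes below (channels, frame hop, phoneme count, hidden width) are
# illustrative assumptions, not values from the actual paper.
import torch
import torch.nn as nn

N_CHANNELS = 256   # assumed number of neural feature channels
N_PHONEMES = 40    # assumed phoneme inventory size
FRAME_MS = 80      # assumed interval between feature frames

class StreamingDecoder(nn.Module):
    def __init__(self, hidden=256):
        super().__init__()
        self.rnn = nn.GRU(N_CHANNELS, hidden, batch_first=True)
        self.head = nn.Linear(hidden, N_PHONEMES)

    def forward(self, frame, state=None):
        # frame: (batch=1, time=1, N_CHANNELS) -- one frame at a time.
        # Returning and re-passing `state` is what makes this streaming.
        out, state = self.rnn(frame, state)
        return self.head(out), state

decoder = StreamingDecoder()
state = None
for _ in range(10):                          # stand-in for the live feed
    frame = torch.randn(1, 1, N_CHANNELS)    # fake neural features
    logits, state = decoder(frame, state)
    phoneme_id = logits.argmax(dim=-1).item()
    # A real device would hand `phoneme_id` (or the raw logits) to a
    # speech synthesiser here, roughly FRAME_MS after the data arrived.
```

Contrast that with a non-streaming decoder, which would wait for an entire sentence's worth of frames before producing anything; that's where delays of several seconds come from.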
This would have been incredibly useful for Professor Stephen Hawking. I was fortunate enough to have met him once, and at the time (very early 2000s) his speech interface consisted of a grid of words/letters on a screen, with a hand-held clicker button to select the required ones. Once a sentence had been composed, it was sent as a serial ASCII stream to an (obsolete even then) ISA-bus synthesis card, which turned the serial data into phonemes and sent them to a speaker.
And here we are, just over 20 years later, with near-instantaneous and natural-sounding speech decoded directly from the brain.
That's what I call progress.
It's not really mind-to-mind communication; it's more like a telephone. It's reading the nerve signals that are sent to the mouth to shape words, so if you used it on someone who wasn't paralysed, the synthesised voice would probably come out slightly out of phase with their actual speech.
I think this is the moment when wrench manufacturers start shaking with fear.
There is no longer any need to break someone's knee to make them talk.
Just put the contraption on their head and say: "Don't try to speak the password in your mind", or "Don't try to imagine where you hid the rugs."
Like the pink elephant thing, don't try to imagine it!
Surely the scenarios you suggest would be at an earlier stage in the thought process...
"So what we’re decoding is after a thought has happened, after we’ve decided what to say, after we’ve decided what words to use and how to move our vocal-tract muscles."
would suggest that the processing here is picking up the signals at the "working out how to set up your mouth, throat and breathing for the words to come out" stage, rather than at the "actually thinking about what you want to say" stage.
Take into account that the first computer was the size of a building; now there are computers smaller than a grain of rice, and faster.
I think this is also where "prompt engineers" could find employment.
"Don't try to imagine how you would say the password. Don't even try to say it in your mind. I won't tell you what is going to happen if you try, so just don't. Unless you want to find out the prize?"
Allegedly, men think about sex 17 times an hour or so, depending on the study. (Well, count me as an outlier on the very low end, if those figures are correct!)
Firstly, these thoughts could be vocalised unimpeded by this technology and cause much mayhem. (I'm glad that I don't have to walk around an office full of beautiful men and women, but I have a filter in my mind that discards such thoughts, if I do indeed have them. The book "The Chimp Paradox" has a good discussion of this filtering.)
Secondly, it might actually produce a consistent figure for how frequently people think about sex (which I am sure is currently guesstimated far too high).
Icon: Scientist, as I'm sure that someone will get a grant to study this.
"these thoughts could be vocalised unimpeded by this technology and cause much mayhem."
I think the processing here is on the output side of that filter rather than the input, so you can fortunately still think stuff without it getting picked up and synthesised.
Really it's looking at the part of the motor cortex that controls speech (vocal cord muscles, etc.), i.e. the mechanics of speech. I suspect it's a somewhat easier (although still hard!) job to model the intended activity of the vocal tract from that, and hence the phonemes that would have been produced, than the far harder task of deciphering arbitrary thought patterns into a rich vocabulary higher up the chain.
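To make that concrete, here's a toy two-stage decode in Python: regress neural features onto a handful of articulator parameters, then look up the nearest phoneme. The articulator set, the linear model and the centroid table are all my inventions for illustration (real decoders are learned end-to-end on actual recordings); the point is that the intermediate target is only a few dimensions, which is why it's so much more tractable than decoding arbitrary thoughts.

```python
# Toy two-stage decode: neural features -> articulator state -> phoneme.
# Every number and name here is an illustrative assumption.
import numpy as np

rng = np.random.default_rng(0)

# Stage 1: a linear map from 256 neural channels down to 4 articulator
# parameters (jaw opening, lip rounding, tongue height, voicing),
# fitted by least squares on simulated paired data.
X = rng.normal(size=(1000, 256))        # neural feature frames
W_true = rng.normal(size=(256, 4))
Y = X @ W_true                          # "measured" articulator states
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

# Stage 2: nearest-centroid lookup from articulator state to phoneme.
# Centroid values are made up; a real table would come from phonetics.
centroids = {
    "aa": [0.9, 0.2, 0.1, 1.0],  # open jaw, voiced       -> "ah"
    "uw": [0.2, 0.9, 0.8, 1.0],  # rounded lips, voiced   -> "oo"
    "ss": [0.1, 0.1, 0.9, 0.0],  # high tongue, unvoiced  -> "s"
}

def decode(frame):
    art = frame @ W                  # neural frame -> articulator estimate
    return min(centroids, key=lambda p: np.linalg.norm(art - centroids[p]))

print(decode(X[0]))                  # phoneme label for one frame
```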