Facebook is working on a brain-computer typing interface

by Mark Tyson on 20 April 2017, 12:01

Tags: Facebook

Quick Link: HEXUS.net/qadglm


Facebook's annual F8 conference in San Jose, California ended yesterday. The event included two days of interactive demos, announcements, and developer advice on getting the best from Facebook. Among the expected social networking and app features were some interesting technology and 'moonshot'-style presentations, many about VR and languages. However, probably the most interesting technologies discussed were a brain-computer interface for typing and skin-hearing.

Penny for your thoughts

We hear that Facebook currently has 60 developers working on a brain-computer interface. The ultimate aim is to allow users to type at about 100 words per minute just by thinking what they want to write. Mark Zuckerberg said that the project "will one day allow us to choose to share a thought, just like we do with photos and videos".

Facebook's brain-computer interface is thankfully non-invasive, relying upon "optical imaging to scan your brain a hundred times per second to detect you speaking silently in your head, and translate it into text," reports TechCrunch. Regina Dugan, the head of Facebook's R&D division Building 8, said work on the non-invasive brain-computer interface began six months earlier. It appears to build upon ongoing work at Stanford, where paralysed patients can type using an implanted brain sensor.
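
To make the idea a little more concrete, here is a minimal, purely illustrative sketch of how such a silent-speech decoding pipeline might be structured: optical-imaging samples arriving at roughly 100 per second are buffered into short windows, reduced to feature vectors, and matched against word templates. Every name, number and the toy vocabulary below is a hypothetical stand-in; Facebook has not published any implementation details.

# Hypothetical sketch of a silent-speech decoding loop (not Facebook's code).
# Assumes ~100 optical-imaging samples per second, each reduced to a feature vector.

import numpy as np

SAMPLE_RATE_HZ = 100          # "a hundred times per second"
WINDOW_SAMPLES = 50           # half a second of brain activity per decoded word
FEATURES = 16                 # made-up dimensionality of one imaging sample

# A toy vocabulary of word "templates" the decoder matches against.
rng = np.random.default_rng(0)
VOCAB = {word: rng.normal(size=FEATURES) for word in ["hello", "world", "share", "photo"]}

def window_to_features(window: np.ndarray) -> np.ndarray:
    """Collapse a (WINDOW_SAMPLES, FEATURES) block of samples into one vector."""
    return window.mean(axis=0)

def decode_word(window: np.ndarray) -> str:
    """Pick the vocabulary word whose template is closest to the window's features."""
    feats = window_to_features(window)
    return min(VOCAB, key=lambda w: np.linalg.norm(VOCAB[w] - feats))

# Simulate one second of "imaging" and decode it word by word.
stream = rng.normal(size=(SAMPLE_RATE_HZ, FEATURES))
text = [decode_word(stream[i:i + WINDOW_SAMPLES])
        for i in range(0, SAMPLE_RATE_HZ, WINDOW_SAMPLES)]
print(" ".join(text))

In a real system the template matching would be replaced by a trained statistical model, but the overall shape (sample, window, featurise, classify, emit text) is the part the F8 description hints at.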

Plans are to mass-produce and ship the non-invasive devices as and when they are ready (within two years is the goal). Meanwhile, Dugan sought to allay privacy fears by saying that the device would only decode "the words you've already decided to share by sending them to the speech centre of your brain." (So during interrogation you must avoid saying things to yourself in your head like: "don't tell them the secret key is in the coffee jar"…)

Skin-hearing

In a complementary sensory development, other researchers are working on skin-hearing. This interesting technology has been built into prototypes that allow patches of skin to mimic the cochlea in your ear, translating sound waves into specific frequencies for the brain. The system has already been tested, but it currently works only with a very limited vocabulary, so it still requires significant hardware and software optimisation.
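
As a rough illustration of the cochlea-mimicking idea, the sketch below splits an audio buffer into a handful of frequency bands and maps each band's energy to the drive level of one vibration point on a skin patch. The band layout, patch size and audio source are all invented for the example and do not reflect the actual prototype hardware.

# Illustrative skin-hearing mapping (assumed design, not the real prototype):
# split audio into frequency bands, one band per vibrating point on the skin.

import numpy as np

SAMPLE_RATE = 16000           # assumed audio sample rate in Hz
NUM_ACTUATORS = 8             # assumed number of vibration points on the patch

def audio_to_actuator_levels(audio: np.ndarray) -> np.ndarray:
    """Return one drive level (0..1) per actuator, like a crude cochlea."""
    spectrum = np.abs(np.fft.rfft(audio))                 # magnitude spectrum
    bands = np.array_split(spectrum, NUM_ACTUATORS)       # low to high frequency bands
    energy = np.array([band.mean() for band in bands])    # average energy per band
    return energy / (energy.max() + 1e-9)                 # normalise to 0..1

# Demo: a 440 Hz tone should mostly drive the low-frequency actuators.
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
tone = np.sin(2 * np.pi * 440 * t)
print(np.round(audio_to_actuator_levels(tone), 2))
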



HEXUS Forums :: 15 Comments

“Here you go everyone, a new ‘feature’ for those of you who can't be bothered to learn how to type.”

Because FB don't have a rich enough dataset on people's lives already, and want to be able to read your thoughts…
That's a big ole cup of ‘nope!’ for me.
All for this technology working out, and totally against it being in the hands of a social media platform or government.

I just want to open Pandora's Box and selectively pick out some biscuits, you know?

Edit: how much time will this cut from the current time required to take a picture of your meal / selfie and post something about it on social media for people to not care about? The mundane, now available at lightspeed.
It could give a route to a more honest "like/dislike" though, couldn't it? Suddenly it knows whether you actually like that inane post by Sazza23, who you may or may not remember having met at that friend's BBQ last summer, and more importantly whether you give a damn that she LOLZ just saw a cat that looked like Corbyn, kind of, if you squint. Oh, and these were my Cheerios in my bowl this morning…

Maybe they could add an automatic way to filter your friend list into real friends and mere acquaintances, and automatically send STFU posts to people who are getting on your nerves. It could then morph into deciding your permitted friend group. Sorry Dave, you're not allowed to interact with that person…. you just won't like them. Or worse, they just won't like you. Think of all the social interaction and strife it could bypass… and with all that saved time you have spare capacity for them to load adverts directly into your brain….
My existing brain-to-keyboard interface is still working fine, and my hands double up for grabbing and punching as well.