Reading my mind

Fascinating stuff. And, whoa. The inevitable march towards brain-computer interfaces continues! "Researchers from Russian corporation Neurobotics and the Moscow Institute of Physics and Technology have found a way to visualize a person's brain activity as actual images mimicking what they observe in real time."

We are rapidly moving from keyboard and mouse input – which, although we've done it so long that it *seems* natural, is not – to spatial input; this is truly an astounding leap towards natural computing.

I applaud the application this particular work is aimed at (helping post-stroke patients with rehabilitation devices controlled by brain signals), but imagine a world where we don't have to interact with technology – and each other – through screens!

One of the many challenges is that although there is a standard model for brain architecture, everyone has their own variation, so there are no specific templates that can be applied. No doubt there will be a "training" period for the interface. But once "trained," our personal brain reader will be able to function across all interfaces – unless, of course, Apple and Microsoft put up the usual walled-garden model (a personal gripe, also true of VR headsets: this game only works with this system, etc.).
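
To make that "training" period a little more concrete, here's a minimal, purely illustrative sketch in Python of what per-user calibration might look like: record a batch of labeled brain-signal windows from one person, fit a personal decoder on them, then reuse that decoder for live input. Every name and shape here (`calibrate`, `decode`, the toy EEG data) is my own assumption for illustration – this is not the actual Neurobotics/MIPT pipeline.

```python
# Hypothetical sketch of the per-user "training" (calibration) step:
# since no two brains match a single signal template, a decoder is
# fit on each user's own recordings before it can be used anywhere.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def calibrate(eeg_epochs: np.ndarray, intents: np.ndarray):
    """Fit a personal decoder.

    eeg_epochs: (n_trials, n_channels, n_samples) raw signal windows
    intents:    (n_trials,) labels for what the user was imagining
    """
    features = eeg_epochs.reshape(len(eeg_epochs), -1)  # flatten each trial
    decoder = LinearDiscriminantAnalysis()
    decoder.fit(features, intents)
    return decoder

def decode(decoder, eeg_epoch: np.ndarray):
    """Map one new signal window to the most likely intent."""
    return decoder.predict(eeg_epoch.reshape(1, -1))[0]

# Toy usage: 40 calibration trials, 8 channels, 250 samples each.
rng = np.random.default_rng(0)
train_x = rng.standard_normal((40, 8, 250))
train_y = rng.integers(0, 2, size=40)           # e.g. "left" vs "right"
personal_decoder = calibrate(train_x, train_y)  # the "training" period
print(decode(personal_decoder, rng.standard_normal((8, 250))))
```

The point of the sketch is the workflow, not the model: once the personal decoder exists, it could in principle travel with you across devices – which is exactly where the walled-garden worry comes in.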

But inevitably, the early-stage development pays off, enough people adopt, the squinky convoluted hoops early adopters need to jump through get ironed out, and mass adoption takes off. And while I realize that a true brain-computer interface is a long way off, I'm heartened by all the work I've seen from teams like this one (CTRL-Labs in particular – interestingly, just bought by Facebook). And I hope it will improve quality of life both for patients with limitations and in mundane everyday life.

https://techxplore.com/news/2019-10-neural-network-reconstructs-human-thoughts.html
