Well, I have certainly heard of a UI (User Interface), a GUI (Graphical User Interface) and a CLI (Command Line Interface), but this is my first time coming across something called a Brain-Computer Interface (BCI).
What exactly is a BCI? Brain-Computer Interfaces are devices that translate electrical brain activity into output for an external device. Sensors capture electrophysiological signals, which record the voltage and intensity of each neuron “spike,” and relay that information to an external system such as a computer, essentially translating thinking into action.
These chips are surgically placed under the scalp or implanted close to the brain tissue. The closer the chip sits to the brain, the higher the resolution of the signal it captures. These invasive BCI techniques are used mainly by patients recovering from paralysis or other injuries. There is also a non-invasive way to put BCIs to the test: wearable devices, which are popular with users interested in virtual gaming and robotics.
Despite years of research, these chips remain in the testing phase, with their flagship demonstration being moving a cursor by thought alone. Scientists hope to expand this research to help patients with paralysis and other motor dysfunctions regain normal functioning. The market is expected to grow to an exciting $6.2B by 2030, opening doors for extended research and product deployment, particularly in developing comfortable, reliable hardware to carry out chip functions.
Let’s talk more about the anatomy of a BCI system, which has four main components: signal acquisition, feature extraction, feature translation, and device output. The first step, signal acquisition, receives, refines and amplifies electrophysiological activity from the brain, which is then digitized and transmitted to a computer. The second step, feature extraction, analyzes the brain activity to identify significant characteristics, or features, used to command the BCI system. Common features include the strength and timing of brain responses, power in specific frequency bands, and the firing rates of individual neurons. To promote accuracy, irrelevant signals from outside sources are filtered out. The third step, feature translation, passes the processed features through an algorithm that converts them into commands for the output device. The last step is the device output itself, where the translated commands drive the device to perform specific functions.
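To make those four stages concrete, here is a minimal sketch of the pipeline in Python. It uses a synthetic, EEG-like signal rather than real brain data, and every name, frequency band, and threshold below is an illustrative assumption, not taken from any actual BCI system.

```python
# Toy sketch of the four BCI stages: acquisition, feature extraction,
# feature translation, and device output. All values are illustrative.
import numpy as np

FS = 250  # assumed sampling rate in Hz, typical of consumer EEG headsets

def acquire_signal(seconds=2, alpha_amplitude=2.0, seed=0):
    """Stage 1 (signal acquisition): simulate a digitized EEG channel
    containing a 10 Hz alpha rhythm plus background noise."""
    rng = np.random.default_rng(seed)
    t = np.arange(0, seconds, 1 / FS)
    alpha = alpha_amplitude * np.sin(2 * np.pi * 10 * t)  # 10 Hz component
    noise = rng.normal(0.0, 1.0, t.size)
    return alpha + noise

def extract_band_power(signal, band=(8, 12)):
    """Stage 2 (feature extraction): mean spectral power in one frequency
    band (here the 8-12 Hz alpha band), computed from the FFT."""
    freqs = np.fft.rfftfreq(signal.size, d=1 / FS)
    power = np.abs(np.fft.rfft(signal)) ** 2 / signal.size
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return power[mask].mean()

def translate_feature(band_power, threshold=5.0):
    """Stage 3 (feature translation): map the feature to a device
    command with a simple threshold rule."""
    return "MOVE_CURSOR" if band_power > threshold else "IDLE"

# Stage 4 (device output): here the "device" simply prints the command.
command = translate_feature(extract_band_power(acquire_signal()))
print(command)
```

A real system would replace each stage with far more sophisticated machinery (hardware amplifiers, trained classifiers, closed-loop feedback), but the data flow from raw signal to feature to command follows the same shape.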
There are a variety of real-world applications where these interfaces translate brain signals into commands for devices. For example, Neuralink, founded by Elon Musk, is developing a small, coin-sized implantable chip to track brain signals precisely, with a focus on treating patients with paralysis. Additionally, Neurable is developing headphones that monitor and boost productivity levels throughout the day. “This wearable device also mutes notifications, activates noise canceling and turns on ‘Do Not Disturb.’ It also tracks how different songs and genres impact a user’s focus, then recommends personalized playlists and suggests breaks” (Becher).
In conclusion, BCIs represent a new, highly anticipated market for advancements in healthcare, with significant potential to improve the lives of those affected by motor diseases and to enhance everyday technology.
Sources:
Becher, Brooke. “Brain-Computer Interfaces (BCI) Explained.” Built In, July 2023, builtin.com/hardware/brain-computer-interface-bci.
Shih, Jerry J, et al. “Brain-Computer Interfaces in Medicine.” Mayo Clinic Proceedings, U.S. National Library of Medicine, Mar. 2012, www.ncbi.nlm.nih.gov/pmc/articles/PMC3497935/.
Written by Nidhi Kulkarni from MEDILOQUY