Source: Mouser Electronics
The last century has seen many great leaps forward in medicine, and scientists worldwide are diligently searching for the next one. To many of those researchers, the most promising path to the next medical breakthrough is understanding how the human brain works and then communicating with it in its own signals. The brain is an organic supercomputer that tells the rest of the body how to function, interpreting the data captured by our senses to allow us to interact with the outside world.
Reading Our Thoughts
Modern electronic systems can capture and interpret signals from the brain and then act on them, a technique known as brain-computer interfacing (BCI) or brain-machine interfacing (BMI). The technique is relatively new, but progress is being made quickly. Like many other electronics-based solutions, the theory is quite simple: Sensors capture the electrical signals from the brain; these signals are conditioned and processed to generate a control signal, which is then sent to a device or application (Figure 1). That output can drive a vast range of systems, including computers, machinery, or even other parts of the human body.
Figure 1: A simple block diagram showing the main areas of a BCI implementation. (Source: Moe 2022, redrawn by Mouser Electronics)[1]
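The block diagram in Figure 1 maps naturally onto a software pipeline. The sketch below is a minimal illustration of that structure in Python; the function names and the NumPy-based placeholder processing are assumptions made for clarity, not part of any particular BCI implementation.

```python
import numpy as np

def capture(n_samples: int, n_channels: int) -> np.ndarray:
    """Placeholder for the sensor stage: returns raw electrode samples."""
    return np.random.randn(n_samples, n_channels)  # stand-in for real hardware

def condition(raw: np.ndarray) -> np.ndarray:
    """Placeholder for amplification, filtering, and digitization."""
    return raw - raw.mean(axis=0)          # e.g., remove the DC offset per channel

def extract_features(clean: np.ndarray) -> np.ndarray:
    """Placeholder for the feature extraction stage."""
    return clean.var(axis=0)               # e.g., per-channel signal power

def classify(features: np.ndarray) -> int:
    """Placeholder for the classifier that yields a control decision."""
    return int(features.argmax())          # e.g., pick the most active channel

# One pass through the pipeline: sensors -> conditioning -> features -> command
command = classify(extract_features(condition(capture(256, 8))))
print("control command:", command)
```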
Of course, putting that theory into practice is much more difficult. We already have accurate ways of reading the brain's activity. However, interpreting what those signals mean is much more complex, mainly because the signal for a single action can look quite different depending on the patient's state of consciousness and many other variables. Doing all of this in real time with unobtrusive equipment is more difficult still.
Signal Capture
Different types of signals can be captured from the brain; by interpreting these signals, we can tell what actions the brain is attempting to take.
Three general methods exist for capturing the signal:
• Invasive: Electrodes are implanted directly into the brain. This method achieves readings with the best resolution but involves health risks. Additionally, scarring can occur, which can dampen the signals or move the electrodes into non-optimal positions.
• Semi-invasive: Electrodes are implanted under the skull but outside the brain. This results in good resolution and fewer risks to the patient’s health.
• Non-invasive: Electrodes are attached to the top of the scalp. No surgery is required, but the signal resolution is lower than the other methods.
Different techniques are used in each of these categories depending on the signal being measured. For example, non-invasive techniques include electroencephalography (EEG), magnetoencephalography (MEG), functional magnetic resonance imaging (fMRI), and functional near-infrared spectroscopy (fNIRS). However, these methods on their own are not considered BCI, as they only record brain activity and do not use the results to generate an actionable output.
In general, BCI systems require small, cost-effective, and unobtrusive equipment. For example, implants would be ideal if the health risks could be overcome. On the other hand, fMRI is highly accurate but requires bulky and expensive equipment.
Regardless of the technique, these systems capture brain signals between 1Hz and 100Hz, which can be broken down into bands that correspond to the patient’s state of alertness (Figure 2).[2] For example, the alpha band, roughly between 8Hz and 13Hz, indicates that the patient is relaxed, while the gamma band, between 30Hz and 100Hz, indicates strenuous muscle activity and a high level of sensory input. The signals have amplitudes of between 10μV and 100μV.
Figure 2: Different kinds of waveforms are produced by brain activity. Frequencies are approximate. (Source: Vallabh soni/stock.adobe.com)
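As a simple illustration of the bands in Figure 2, the snippet below maps a dominant frequency to its conventional band name. The article quotes only the alpha and gamma ranges; the delta, theta, and beta limits used here are the commonly cited conventions, and all boundaries are approximate.

```python
def eeg_band(freq_hz: float) -> str:
    """Return the conventional EEG band for a dominant frequency (approximate limits)."""
    bands = [
        ("delta", 1, 4),     # deep sleep
        ("theta", 4, 8),     # drowsiness
        ("alpha", 8, 13),    # relaxed wakefulness
        ("beta", 13, 30),    # active concentration
        ("gamma", 30, 100),  # strenuous activity, high sensory input
    ]
    for name, lo, hi in bands:
        if lo <= freq_hz < hi:
            return name
    return "out of range"

print(eeg_band(10))   # -> alpha
print(eeg_band(45))   # -> gamma
```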
Conditioning
After the signal has been captured, it must be conditioned to allow useful information to be extracted and processed. The signal is initially amplified, and any frequencies not necessary for the classification are filtered out along with noise. The signals are then digitized for processing. Signal conditioning is important, as the captured signals have a very low signal-to-noise ratio. After the signal has been cleaned, the features that can help interpret the intended actions of the brain are easier to determine and extract.
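As an illustration of the conditioning step, the sketch below applies a 1Hz-100Hz band-pass filter and a 50Hz notch for mains interference using SciPy. The sampling rate, filter orders, and the synthetic input signal are arbitrary example values chosen for the sketch, not settings from a real BCI system.

```python
import numpy as np
from scipy.signal import butter, iirnotch, filtfilt

fs = 250.0                          # example sampling rate in Hz (assumption)
t = np.arange(0, 2.0, 1.0 / fs)
# Synthetic 10Hz "alpha" component plus broadband noise, in volts
raw = 20e-6 * np.sin(2 * np.pi * 10 * t) + 5e-6 * np.random.randn(t.size)

# Band-pass 1-100Hz to keep only the frequency range of interest
b_bp, a_bp = butter(4, [1.0, 100.0], btype="bandpass", fs=fs)
filtered = filtfilt(b_bp, a_bp, raw)

# Notch filter at 50Hz to suppress mains interference (60Hz in some regions)
b_notch, a_notch = iirnotch(50.0, Q=30.0, fs=fs)
conditioned = filtfilt(b_notch, a_notch, filtered)
```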
Feature Extraction
Extracting features is possibly the most challenging stage of the process, as a host of variables is involved and the signals can change depending on many factors, including the subject’s attention, mental state, and even anatomical differences between subjects. In addition to extracting the relevant parts of the measured signal, the feature extraction process transforms the signal to derive new features, increasing efficiency and providing higher accuracy. Different types of transforms are better suited to different applications. Among the most common transforms for feature extraction are wavelets, fast Fourier transforms (FFT), linear discriminant analysis (LDA), principal component analysis (PCA), empirical mode decomposition (EMD), and self-organizing maps (SOM).[3]
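As one concrete example of the transforms listed above, the snippet below uses an FFT-based power spectrum (Welch's method from SciPy) to turn a conditioned signal into band-power features, a common and compact input for the classification stage. The band limits follow the approximate ranges given earlier; the beta range and the synthetic test signal are illustrative assumptions.

```python
import numpy as np
from scipy.signal import welch

def band_powers(signal: np.ndarray, fs: float) -> dict:
    """FFT-based feature extraction: average power in each EEG band."""
    freqs, psd = welch(signal, fs=fs, nperseg=int(fs * 2))  # 2-second windows
    bands = {"alpha": (8, 13), "beta": (13, 30), "gamma": (30, 100)}
    return {
        name: float(np.mean(psd[(freqs >= lo) & (freqs < hi)]))
        for name, (lo, hi) in bands.items()
    }

# Example: a synthetic 10Hz signal should show most of its power in the alpha band
fs = 250.0
t = np.arange(0, 4.0, 1.0 / fs)
features = band_powers(np.sin(2 * np.pi * 10 * t), fs)
print(features)
```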
As with the signal capture stage, a careful balance of power and performance is required to select a suitable component for this task. In some applications, a PC will be required to provide the processing power; in others, modern microcontrollers, especially those with AI capabilities and support for complex mathematical functions, could accomplish the task.
Classification
The classification stage of BCI typically uses a translation algorithm to convert the user’s desired action, derived from the feature extraction stage, into an output control signal for the target device. The algorithm is built on a dataset, usually available from earlier studies. The extracted features may be classified by frequency and shape using linear and nonlinear methods. It is not an exact science, and the user, the classifier, or both need to be trained to reduce errors and maximize accuracy. The development of translation algorithms relies on classifiers such as k-nearest neighbor (kNN), linear discriminant analysis (LDA), neural networks, and support vector machines (SVM).
The computing power and computation speed required for this stage depend on which classifiers are used. A linear classifier will be fast and use less power and computing resources, but it will not be as accurate. Nonlinear classification techniques will offer more accuracy at the expense of time and power. The optimal method of classification will depend on the application.
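To make the linear-versus-nonlinear trade-off concrete, the sketch below trains both an LDA classifier and an RBF-kernel SVM on the same feature set using scikit-learn. The data are random placeholders standing in for extracted features and their labeled intended actions, not a real BCI dataset.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

# Placeholder dataset: 200 trials, 3 band-power features each, two intended actions (0/1)
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X[:, 0] + 0.5 * X[:, 1] ** 2 > 0.5).astype(int)   # synthetic labels

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

lda = LinearDiscriminantAnalysis().fit(X_train, y_train)  # fast, linear decision boundary
svm = SVC(kernel="rbf").fit(X_train, y_train)             # slower, nonlinear boundary

print("LDA accuracy:", lda.score(X_test, y_test))
print("SVM accuracy:", svm.score(X_test, y_test))
```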
Output
There are many ways of outputting the signal. It can control a cursor on a screen, as Neuralink has recently demonstrated with a quadriplegic patient operating a complex CAD program and successfully playing video games,[4] or it can control prosthetics and other machinery. Many existing applications enhance patients’ lives, and more are in development. Perhaps the most exciting and transformative examples will be those in which the output is converted back into the brain’s own signals to control parts of the body.
Neuromodulation is a well-understood medical technique that can send or block signals to the nerve cells, or neurons, in some regions of the body. It was first used as deep brain stimulation in the early 1960s to tackle chronic pain. Today, pain relief remains the most well-known application for neuromodulation, but there are many others, such as spinal cord stimulation to help patients with Parkinson’s disease improve their mobility.[5] The number and sophistication of neuromodulation treatments are growing as electronics advance and our understanding of how the brain works improves.
There are two distinct types of neuromodulation: electrical and chemical. In electrical stimulation, a pulse generator and power source are combined to apply the treatment to the brain, the spinal cord, or peripheral nerves. In chemical stimulation, pharmaceutical agents are applied precisely to the area where they are needed. In either method, the treatment can inhibit the pain signals reaching the brain or stimulate neural impulses.
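In the electrical case, the stimulation delivered by the pulse generator is usually described by a few parameters such as amplitude, pulse width, and repetition rate. The sketch below generates one second of a charge-balanced biphasic pulse train to show how those parameters combine; the numbers are illustrative examples only, not clinical settings.

```python
import numpy as np

def biphasic_pulse_train(amplitude_ma: float, pulse_width_us: float,
                         rate_hz: float, fs: float = 100_000.0) -> np.ndarray:
    """One second of a charge-balanced biphasic stimulation waveform."""
    samples = int(fs)
    waveform = np.zeros(samples)
    width = int(pulse_width_us * 1e-6 * fs)   # samples per phase
    period = int(fs / rate_hz)                # samples between pulse onsets
    for start in range(0, samples - 2 * width, period):
        waveform[start:start + width] = amplitude_ma               # first phase
        waveform[start + width:start + 2 * width] = -amplitude_ma  # balancing phase
    return waveform

# Illustrative (non-clinical) values: 2mA amplitude, 90us per phase, 130Hz rate
stim = biphasic_pulse_train(amplitude_ma=2.0, pulse_width_us=90.0, rate_hz=130.0)
```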
Talking Back to the Brain
So far, this article has looked at the one-way route from the brain to the application, but the application can also communicate directly with the brain. Understanding the brain’s signals makes it possible to mimic the signals usually sent by our senses, telling the brain what is happening in the body and the outside world. In some illnesses, the link between the senses and the nervous system is broken. For example, many patients with hearing loss have an intact cochlear nerve, which supplies auditory signals to the brain, but also have damage to the inner ear that stops the passage of sound signals.[6] Cochlear implants (Figure 3) have been developed to bypass the damaged area and supply a representation of the sound to the brain. Sound is picked up externally by a microphone, then processed and transmitted to an implanted receiver. The receiver converts the sound into electrical signals and sends them to an electrode array, which delivers representations of the sound directly to the auditory nerve, allowing the person to recover some hearing.
Figure 3: A representation of a typical cochlear implant. (Source: Pepermpron/stock.adobe.com)
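The processing chain described above is often implemented as a filter bank: incoming sound is split into frequency bands, and each band's energy drives one electrode along the cochlea. The sketch below illustrates that idea with simplified per-band envelope extraction; the channel count, frequency range, and band spacing are illustrative assumptions, not the specification of any real device.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def channel_envelopes(audio: np.ndarray, fs: float, n_channels: int = 8) -> np.ndarray:
    """Split audio into logarithmically spaced bands and return each band's envelope."""
    edges = np.logspace(np.log10(250), np.log10(6000), n_channels + 1)  # example range in Hz
    envelopes = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        b, a = butter(2, [lo, hi], btype="bandpass", fs=fs)
        band = filtfilt(b, a, audio)
        envelopes.append(np.abs(hilbert(band)))   # envelope would drive one electrode
    return np.array(envelopes)

# Example: a 1kHz tone mostly activates the channel whose band contains 1kHz
fs = 16_000.0
t = np.arange(0, 0.1, 1.0 / fs)
env = channel_envelopes(np.sin(2 * np.pi * 1000 * t), fs)
print("most active electrode channel:", int(env.mean(axis=1).argmax()))
```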
Future Trends
The range of techniques described in this article highlights the progress made in interacting with the brain, whether reading the brain’s signals to control a machine, using neural communication to connect with other areas of the body in neuromodulation, or interfacing with the brain directly. It is only a matter of time before these techniques are combined to provide a completely bidirectional system. For instance, when a patient’s spinal column is damaged, the damaged part of the nervous system could be bypassed completely to send the information required to control a limb. When that is achieved, a return signal could be sent in the opposite direction to the brain to provide feedback. This process would allow the patient to move the limb and feel sensations they may not have felt before.
This type of treatment may be quite far in the future, and each part will probably be developed independently. Feedback to the brain will likely be used first in prosthetic BCI applications, long before we have designed systems capable of accurately replicating the motor skills required to move a limb in a patient with a damaged spinal column.
The technologies that will enable these advances are either in development or already here. Sensors that can capture brain activity and the analog solutions needed to filter, amplify, and convert the signals are constantly improving. In the future, some of these analog stages may become unnecessary, as researchers are exploring ways to extract features directly from the raw measured signal and perform classification using AI in a single step, potentially bypassing traditional conditioning. As our understanding of AI progresses and its hardware and algorithms improve to use less energy and computing power, implants will become smaller and more viable, providing patients with more accurate solutions that enhance their lives further while having fewer drawbacks.
To learn more, visit www.mouser.com