AI: Giant leaps in brain-computer interfaces


Facebook (as it was known before it entered the Metaverse) made headlines when it started funding brain-computer interface technology, looking for a way to let users create text just by thinking about it. Facebook wanted to build a new way to interact with technology: a system in which a user could simply imagine they were speaking, and a device would turn the resulting electrical impulses into text.

Facebook hoped to create a head-worn device that could pick up language, translating electrical signals from the brain into digital information. Despite the intriguing prospect of a social media giant developing the first consumer brain interface for language, the company pulled out of the project last year, releasing its existing language research as open access and refocusing on interfaces that capture movement-related nerve signals rather than language.

While the American giant has withdrawn from the field, a number of laboratories are moving forward and making breakthroughs in turning brain activity into text or speech. These projects collect data at the source, using electrodes in direct contact with the surface of the brain. Unlike systems based on wearable devices, brain-computer interfaces that use implanted electrodes offer a better signal-to-noise ratio and allow much more detailed and specific recordings of brain activity.

First systems developed

Last year, Facebook’s research partner UCSF announced that its Chang Lab, named after the neurosurgeon Edward Chang who heads the facility, had created a functional thought-to-text interface as part of a research trial. The system uses sensors embedded in a sheet of polymer that, when laid on the surface of the brain, can pick up the user’s neural signals. This information is then decoded by machine learning systems to produce the words the user wants to speak.

The first user of the system was a person who suffered a brainstem stroke, which left him with extremely limited head, neck and limb movement and an inability to speak. Since the stroke, he has had to communicate by moving his head, using a pointer attached to a baseball cap to touch letters on a screen.

Typically, signals travel from the brain to the speech muscles via nerves, which you can think of as the body's electrical wiring. In the trial participant's case, that wiring had effectively been severed between the brain and the vocal muscles: when he tried to speak, the signals formed but could not reach their destination. The interface picks up these signals directly from the brain's speech motor cortex, analyzes them to work out which speech-related muscles he was trying to move, and uses that information to infer the words he meant to say, converting the intended muscle movements into electronic speech. As a result, the participant can communicate faster and more naturally than he has been able to at any point in the 15 years since his stroke.

Increased precision

The trial participant can produce any of the 50 words the system is able to recognize. The words were chosen by the UCSF researchers because they were common, relevant to his care, or simply words he wished he could say, such as “family”, “good” or “water”.

To create a working interface, the system had to be trained to recognize which signals corresponded to which words. To do this, the participant had to attempt to say each word almost 200 times in order to build a data set large enough for the interface software to learn from. The signals were sampled from the 128-channel electrode array on his brain and interpreted by an artificial neural network, a nonlinear model capable of learning complex patterns of brain activity and associating them with attempted speech.
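To make the decoding stage more concrete, here is a minimal sketch, not the UCSF codebase, of the kind of neural-network word classifier described above: a small recurrent network that maps a window of 128-channel neural activity to a score for each of the 50 candidate words. The channel count and vocabulary size come from the article; the window length, architecture and choice of PyTorch are illustrative assumptions.

```python
# Minimal sketch of a word decoder: 128-channel neural activity -> one of 50 words.
# Channel count and vocabulary size are from the article; everything else is assumed.
import torch
import torch.nn as nn

N_CHANNELS = 128      # electrode channels, per the article
VOCAB_SIZE = 50       # candidate words, per the article
WINDOW_STEPS = 200    # assumed number of time steps in one attempted-word window

class WordDecoder(nn.Module):
    def __init__(self, hidden_size: int = 256):
        super().__init__()
        # A bidirectional GRU summarizes the neural time series.
        self.rnn = nn.GRU(N_CHANNELS, hidden_size, batch_first=True, bidirectional=True)
        # A linear head maps that summary to one score per candidate word.
        self.head = nn.Linear(2 * hidden_size, VOCAB_SIZE)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, channels) window of recorded activity
        _, h = self.rnn(x)                         # h: (2, batch, hidden)
        summary = torch.cat([h[0], h[1]], dim=-1)  # concatenate both directions
        return self.head(summary)                  # unnormalized word scores (logits)

if __name__ == "__main__":
    model = WordDecoder()
    window = torch.randn(1, WINDOW_STEPS, N_CHANNELS)  # stand-in for real recordings
    word_probs = torch.softmax(model(window), dim=-1)
    print(word_probs.shape)  # torch.Size([1, 50]): one probability per candidate word
```

In a real trial, the roughly 200 recorded attempts per word would form the training set for a model of this general shape.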

When the user tries to speak a sentence, a language model predicts the probability that he is trying to say each of the 50 words, along with how those words are likely to combine into a sentence, and produces the final output in real time. The system was thus able to decode the participant's speech at a rate of up to 18 words per minute, with an accuracy of 93%.
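A common way to combine a word classifier's output with a language model, as described above, is a beam search over candidate sentences. The sketch below illustrates that general idea on a toy vocabulary with assumed bigram probabilities; it is not the published UCSF decoder.

```python
# Minimal sketch: combine per-word neural probabilities with a bigram language model
# via beam search. Probabilities, vocabulary and weights are illustrative assumptions.
import math

def beam_search(word_probs, lm_bigram, vocab, beam_width=5, lm_weight=1.0):
    """word_probs: one dict per attempted word, mapping word -> P(word | neural signal).
    lm_bigram: dict mapping (previous word, word) -> P(word | previous word)."""
    beams = [([], 0.0)]  # (partial sentence, log score)
    for probs in word_probs:
        candidates = []
        for sentence, score in beams:
            prev = sentence[-1] if sentence else "<s>"
            for word in vocab:
                p_neural = probs.get(word, 1e-9)          # decoder's estimate
                p_lm = lm_bigram.get((prev, word), 1e-6)  # language model's estimate
                candidates.append((sentence + [word],
                                   score + math.log(p_neural) + lm_weight * math.log(p_lm)))
        # Keep only the most promising partial sentences.
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:beam_width]
    return beams[0][0]

# Toy example on a three-word subset of the vocabulary:
vocab = ["I", "am", "good"]
word_probs = [{"I": 0.7, "am": 0.2, "good": 0.1},
              {"I": 0.1, "am": 0.6, "good": 0.3},
              {"I": 0.1, "am": 0.2, "good": 0.7}]
lm_bigram = {("<s>", "I"): 0.5, ("I", "am"): 0.6, ("am", "good"): 0.4}
print(beam_search(word_probs, lm_bigram, vocab))  # ['I', 'am', 'good']
```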

The UCSF team now hopes to extend the trial to new participants. According to David Moses, a postdoctoral engineer in Chang's lab and one of the lead authors of the research, many people are asking to take part in UCSF's work on thought-to-speech interfaces. “You need the right person. There are a lot of criteria for inclusion, not only in terms of the type of disability of the person, but also their general health and other factors. It is also very important that they understand that this is a research study and there's no guarantee that the technology will directly benefit them, at least in the near future. It takes a particular type of person,” he explains in an interview with ZDNet.

Stunning results

Most of the arrays used in human trials of invasive interfaces, where electrodes are placed in direct contact with the brain, are made by a single company, Blackrock Neurotech.

Blackrock Neurotech is also working on language applications for brain-machine interfaces. Rather than using signals sent to the speech muscles, as in the UCSF trial, the company has built a system based on imagined handwriting: the user imagines writing an “A”, and the system converts it into written text using an algorithm developed at Stanford University. The system currently operates at around 90 characters per minute, and the company hopes it will one day reach 200 characters per minute, roughly the speed at which an average person writes by hand.
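As a rough illustration of what a character-by-character pipeline produces, the sketch below turns a stream of per-attempt letter probabilities into text by taking the most likely character for each imagined letter. The alphabet and probability values are assumptions for illustration, not Stanford's or Blackrock Neurotech's actual algorithm.

```python
# Minimal sketch: assemble decoded characters into text, one imagined letter at a time.
# Each attempt is represented by an assumed probability distribution over characters.
import string

ALPHABET = list(string.ascii_lowercase) + [" "]  # assumed 26 letters plus space

def decode_characters(prob_stream):
    """prob_stream: iterable of dicts mapping character -> probability for one attempt."""
    return "".join(max(probs, key=probs.get) for probs in prob_stream)

# Toy stream standing in for three imagined characters:
stream = [
    {"h": 0.8, "n": 0.1, " ": 0.1},
    {"i": 0.7, "l": 0.2, " ": 0.1},
    {" ": 0.9, "i": 0.05, "o": 0.05},
]
print(repr(decode_characters(stream)))  # 'hi '
```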

This system, which is perhaps one of the closest to commercialization, is likely to be used by people with conditions such as amyotrophic lateral sclerosis (ALS), an incurable disease also known as Lou Gehrig's disease or motor neuron disease. In its advanced stages, ALS can cause locked-in syndrome, in which the person cannot use any of their muscles to move, speak, swallow, or even blink, while their mind remains as active as it has always been. Interfaces like those from Blackrock Neurotech are intended to allow people with ALS or locked-in syndrome, which can also be caused by certain strokes, to continue to communicate.

“We had instances where the neural interface spelled out a word that the autocorrector kept correcting, and participants reported that it was a word they made up from scratch when they started dating their partner. The neural interface was able to come up with a word that only two people in the world knew,” Marcus Gerhardt, CEO and co-founder of Blackrock Neurotech, told ZDNet. The system currently operates with an accuracy of 94%, which increases to 99% once autocorrection is applied.
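Autocorrection of this kind typically works by snapping a decoded character sequence to the closest entry in a known vocabulary, which explains both the accuracy gain and why an invented word can get overridden. The sketch below shows a generic dictionary-based autocorrector using Python's standard-library difflib; the word list and similarity threshold are assumptions, not Blackrock Neurotech's actual system.

```python
# Minimal sketch of dictionary-based autocorrection for decoded text.
# The dictionary and cutoff are illustrative assumptions.
import difflib

DICTIONARY = ["water", "family", "good", "hello", "thirsty"]  # assumed word list

def autocorrect(decoded: str, cutoff: float = 0.6) -> str:
    """Return the closest dictionary word, or the raw decoding if nothing is close enough."""
    matches = difflib.get_close_matches(decoded.lower(), DICTIONARY, n=1, cutoff=cutoff)
    return matches[0] if matches else decoded

print(autocorrect("watre"))  # 'water'  -- a likely decoding slip gets corrected
print(autocorrect("blorf"))  # 'blorf'  -- no close match, so the raw decoding is kept
```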

The (critical) issue of cost

Although still at a relatively early stage of development, brain-machine interfaces have the potential to improve the quality of life of patients with conditions that currently prevent them from speaking. And while the technologies behind these interfaces have made great strides in recent years, becoming faster at translating attempted speech into words on a screen, there is still a lot of work to be done before the systems can be deployed to general patient populations.

It goes without saying that, given the novelty of brain-machine interface systems, privacy and data ownership concerns need to be addressed before large-scale commercialization. Because these interfaces are so new, more also needs to be learned about their long-term use. There is the practical question of how long the electrodes will remain functional in the electrode-unfriendly environment of the brain: Blackrock Neurotech's arrays have been in situ in humans for seven years, and the company believes ten years is possible.

There is also the question of long-term support, according to Mariska Vansteensel, assistant professor at the Brain Center of UMC Utrecht. Regular parameter adjustments will be needed to keep systems optimized as disease progression, user preferences, or other factors affecting brain activity change. Hardware may also need to be replaced or updated. At this time, there is no agreed framework for determining who should manage long-term interface support.

Perhaps the most pressing challenge for technologies like those from Blackrock Neurotech and UCSF is that they are aimed at relatively small patient populations. At the same time, the systems themselves are specialized and expensive, and implanting them requires equally specialized and expensive neurosurgery. If language-oriented brain-machine interfaces do make it to market, their cost could prevent them from reaching those who need them most.

Source: ZDNet.com




