Calls, noise reduction, sound reproduction… wireless headphones are getting ready for a giant leap


The French manufacturer Orosound is working on integrating artificial intelligence into its future headsets so they can better handle calls and noise reduction, or adapt sound reproduction to the music being played.

The Orosound Tilde Pro headset // Source: Orosound

Today, the talk is all about Web3, blockchain, NFTs and the metaverse. But a few years ago, the buzzword in vogue was artificial intelligence. That does not mean AI has disappeared from the world of new technologies. On the contrary, it has gradually worked its way, more or less discreetly, into all our devices. Today, most smartphones use AI to produce photos you will love at first glance. AI is obviously found in voice assistants. And even products as seemingly simple as headphones are not spared.

Wait, what? Headphones need artificial intelligence? At first glance, these two worlds seem very distant. After all, headphones are above all transducers connected to an analog source, simply reproducing the sound that source emits. But as the market has evolved, driven by noise-cancelling headsets, wireless headsets and DACs integrated directly into the headphones, manufacturers can now build a dose of machine learning directly into their devices. This is what Eric Benhaim, co-founder and CTO of Orosound, told us. A former signal processing engineer at Parrot, he now heads the design of modular noise-cancelling headphones at the French company, founded in 2015.

AI for better quality calls

Orosound may mainly offer headsets intended for professionals, notably in call centers, but the French manufacturer’s initiative is particularly interesting in that it relies on artificial intelligence for several aspects. First, since its headsets are geared towards calls, Orosound uses AI for communication. “Artificial intelligence will be used to make the algorithms truly adaptive to the context of use and to the person’s listening,” explains Eric Benhaim. Concretely, depending on the user’s environment, it is not easy for a headset’s microphones to determine which noises to filter out and which voice should pass through the noise reduction filter. This is also one of the main challenges for the wireless earbuds offered by many manufacturers, which struggle to erase ambient noise or, worse, considerably degrade your voice during calls.

The microphones of the Bose QC45 // Source: Frandroid

But what is the difference between the artificial intelligence Orosound is working on and the ambient noise reduction features offered natively across the industry? Here, the headset continues to learn as it is used. Granted, Orosound trained its noise reduction algorithms before the headsets reached the market, but the headset itself keeps learning over time thanks to its built-in processors. “Until now, all noise reduction algorithms have been statistical algorithms based on the idea that, when the headphones detect that we are not speaking, the sound is background noise, and it is this background noise that gets removed,” explains Eric Benhaim. The co-founder of Orosound plans to go further with his brand’s future headsets: “We are moving to a higher stage, by being truly capable of separating, in real time, the interesting sound of voices from the noises that are annoying.”
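To give an idea of what the statistical approach Eric Benhaim describes looks like, here is a minimal sketch in Python (our illustration, not Orosound’s code): a crude energy threshold decides when the user is not speaking, those frames feed a running estimate of the background noise, and that estimate is then subtracted from every frame’s magnitude spectrum.

```python
import numpy as np

def denoise(frames, vad_threshold=0.01, alpha=0.9, floor=0.05):
    """Statistical noise reduction sketch (spectral subtraction).

    `frames` is a 2-D array: one time frame per row, one magnitude
    spectrum bin per column. Frames whose mean energy falls below
    `vad_threshold` are treated as "not speaking" and update a running
    noise estimate, which is subtracted from every frame.
    """
    noise = np.zeros(frames.shape[1])
    out = np.empty_like(frames)
    for i, mag in enumerate(frames):
        if mag.mean() < vad_threshold:                # crude voice activity detection
            noise = alpha * noise + (1 - alpha) * mag  # smooth the noise estimate
        clean = mag - noise                            # subtract estimated background
        out[i] = np.maximum(clean, floor * mag)        # spectral floor limits artifacts
    return out
```

The weakness the article points at is visible in the sketch: everything hinges on the "not speaking" heuristic, whereas the approach Orosound targets would separate voices from noise in real time even while both are present.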

AI for noise reduction adapted to the environment

In addition to noise reduction for calls, Orosound is also looking at active noise cancellation (ANC), which isolates users while they enjoy their music or podcasts. The idea here is to let the headset analyze its context of use based on motion sensors and the surrounding soundscape. An approach that may sound similar to Sony’s adaptive sound control on the WH-1000XM4.

The Sony WH-1000XM4 // Source: Frandroid

The Japanese manufacturer does indeed offer, in its app, to adjust the level of noise reduction depending on whether you are seated, moving, in transport or in a specific place. But that is not artificial intelligence, Eric Benhaim replies, just simple algorithms:

We, with AI models, will be able to know what the surrounding context is, what the type of activity is, and to make the algorithms truly adaptive. ANC filters will really be generated for your environment, not just preset ANC filters. Currently, ANC performance is fixed and will vary depending on the user. We really want to avoid this variability, which can be felt depending on head size, whether the headset fits more or less snugly, morphology, and where you are moving about. The goal is to push this digital filtering as far as possible and avoid instabilities.
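One textbook way to generate a cancellation filter matched to the environment rather than shipping presets is an adaptive filter that tunes itself against what the listener still hears. The sketch below uses a least-mean-squares (LMS) FIR filter; this is a classic illustration of the adaptive-filtering idea, not Orosound’s actual algorithm, and the microphone-signal names are our assumptions.

```python
import numpy as np

def lms_cancel(noise_ref, primary, taps=8, mu=0.01):
    """Adaptive noise cancellation sketch via the LMS algorithm.

    `noise_ref` : signal from an outward-facing microphone
    `primary`   : the noise as heard at the ear (to be cancelled)
    Returns the residual the listener would still hear; the filter
    weights adapt online, so the filter is "generated for" the
    current environment instead of being a fixed preset.
    """
    w = np.zeros(taps)                          # filter weights, adapted online
    residual = np.zeros(len(primary))
    for n in range(taps, len(primary)):
        x = noise_ref[n - taps:n][::-1]         # most recent reference samples first
        y = w @ x                               # anti-noise estimate
        e = primary[n] - y                      # what remains after cancellation
        w += 2 * mu * e * x                     # LMS weight update
        residual[n] = e
    return residual
```

The instability Eric Benhaim mentions is a real constraint here: the step size `mu` must stay small enough for the adaptation to remain stable as the acoustic path changes with fit and head shape.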

AI to adapt the sound to the music

Even before communicating through a microphone or isolating yourself from noise pollution, the very purpose of a headset is to enjoy your music, calls or podcasts. Orosound has developed artificial intelligence algorithms for listening as well. The idea this time is to automatically adapt the sound signature to the type of content the user is listening to, without having to manually adjust the equalizer or presets in the app, regardless of the transducer, DAC or electroacoustic elements used.

Amazfit, Sony, Technics and JBL manual equalizers

Concretely, the headphones will be able to recognize when you are listening to a podcast and bring the mids forward, push the bass if you are listening to house, or apply a flat curve if you switch to classical music. The same goes for the volume of tracks, which the headphones can automatically raise if you go from a track encoded at a high level to another with lower gain. “There is no reason today to have to adjust the volume with buttons depending on what we are listening to,” laments Eric Benhaim. Here again, the AI must be embedded directly in the headset, says Orosound’s CTO: “The entire processing chain must be taken into account to account for losses along the digital chain and the electroacoustic elements, with the lowest possible processing latency.”
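The two behaviors described above can be sketched as a mapping from a detected content type to an EQ preset, plus automatic level matching. In this illustration the content classifier is assumed to exist elsewhere (it is the hard, AI part); the preset names and gain values below are our own placeholders, not Orosound’s tuning.

```python
import numpy as np

# Hypothetical per-content EQ presets: gains in dB for (low, mid, high) bands.
EQ_PRESETS = {
    "podcast":   (0.0, +3.0, 0.0),   # bring the mids forward for voice
    "house":     (+4.0, 0.0, +1.0),  # push the bass
    "classical": (0.0, 0.0, 0.0),    # flat curve
}

TARGET_RMS = 0.1  # desired playback level (linear scale), an assumed target

def adapt_playback(samples, content_type):
    """Pick band gains for the detected content type and rescale the
    track so quietly mastered titles play back at the same level as
    loud ones (the automatic volume matching the article describes)."""
    gains_db = EQ_PRESETS.get(content_type, (0.0, 0.0, 0.0))
    rms = np.sqrt(np.mean(samples ** 2))
    makeup = TARGET_RMS / max(rms, 1e-9)   # gain needed to reach the target level
    return gains_db, samples * makeup
```

Doing this on the headset itself, as Eric Benhaim insists, means both steps must run within the audio pipeline’s latency budget rather than in a companion app.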

AI in headsets is coming soon

One sore point remains, and a sizeable one: when can the general public expect access to AI technologies embedded directly in headsets, or even wireless earbuds? On this point, Eric Benhaim is rather realistic: “Today, nobody really has AI in headsets.” It must be said that the electronic chips needed to process this data in real time are still rare and, above all, consume too much power. Nevertheless, Orosound’s plan is indeed to offer this type of algorithm as quickly as possible. To do so, the French manufacturer first intends to go through updates to its existing headset, the Orosound Tilde Pro, which is equipped with three chips. But the bulk of the features will arrive in a second stage, via a new generation of headphones integrating new components: “We already have models that run on a PC, that are integrated into the microphone acquisition chain and that perform very well, but we really want to push to embed them in the headset.”

The Orosound Tilde Pro modular headset // Source: Orosound

The fact is that, while Orosound develops its own headsets and algorithmic solutions, the French manufacturer relies on suppliers for the chips that equip its devices. These same chips could be adopted by the rest of the industry to offer, in turn, artificial intelligence in headsets and earbuds. Costs still need to be kept under control, especially amid a global shortage, as does the battery life of these wireless headsets.
