When we ask Douglas Eck about advances in language processing using artificial intelligence, he suggests pressing the "subtitles" button on Meet, the videoconferencing service used for the interview because of the Covid-19 pandemic. The words of this American engineer, who came to Paris to work at the French headquarters of Google, are then displayed in writing, live and without error, beneath the window where we see him wearing a headset. This innovation, unthinkable until recently, is also available on most videos on YouTube, the Google subsidiary, and on the voice-recorder app of its latest phones, which offers to automatically transcribe all audio recordings.
These new possibilities are just one example of the progress digital companies, particularly giants like Google, Apple, Facebook, and Amazon (GAFA), have made in natural language processing in recent years. Some of these innovations are already in practical use. Others are still at the research stage, shown off at annual developer conferences such as Google I/O (which took place May 18-20) or Facebook F8 (June 2).
In the crucial area of translation, services like Google Translate, which has expanded its offering to 104 languages, or Germany's DeepL can now translate entire paragraphs in a coherent and fluid manner. Thanks to these advances, Google offers to translate the subtitles of YouTube videos.
20 billion translations per day on Facebook
Facebook has also made great progress. Artificial intelligence generates 20 billion translations per day on the social network (several dozen languages, including Wolof, are available), compared to only 6 billion in 2019. "This area is very important for Facebook. And we know that simultaneous translations in real time will be possible," explains French researcher Yann LeCun, chief scientist at Facebook and a pioneer of artificial intelligence.
The dream of a machine translating live conversations may now be within reach. Google Translate approaches it with a slight delay: you can speak in one language and have your interlocutor hear or read the translation on a smartphone, and even listen to the translated response in earbuds, provided they are the company's latest models.
The barriers between text and image are blurring. With the Google Lens augmented reality app, students can use their smartphones to scan a textbook page or a handwritten sentence to translate it or find more information online. A tourist can understand a sign or a menu, or learn about a monument.