When Google takes an interest in our movements and body language to make objects smarter


What if all our everyday objects needed to become smarter were a radar capable of analyzing our movements and gestures? That is precisely the goal of the work of a dedicated team at Google.

Big Brother’s Orwellian eye is not far away. Researchers from Google’s ATAP team (Advanced Technology and Projects) are presenting the progress of their work on the Soli sensor. Unveiled in 2015 and later integrated into the Pixel 4, this sensor can interpret the user’s gestures, and Google imagines deploying it on a larger scale in everyday connected devices in order to adapt their behavior to the habits of each user.

More concretely, the idea is to have a kind of radar at home analyzing people’s movements and gestures, so as to pause the television program automatically when you get up from the sofa, or block notifications when you settle in for some rest. Combined with machine learning algorithms, this technology could allow our everyday objects to better understand our habits in order to adapt to them and become more considerate. It is very promising, but also a little frightening. Reassuringly, the system the Google teams are working on does not use any cameras, and the ATAP team is adamant: radar is the least intrusive technology available for collecting spatial data.
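To make the idea concrete, here is a minimal, purely illustrative sketch in Python of how a presence-aware device might react to readings from a radar-like sensor. The read_distance_m() function and the thresholds are hypothetical placeholders invented for this example; nothing here reflects Google’s actual Soli hardware or software.

import random
import time

def read_distance_m():
    """Hypothetical stand-in for a radar-like presence sensor.
    Returns the estimated distance (in meters) to the nearest person."""
    return random.uniform(0.5, 6.0)  # simulated reading for the sketch

# Assumed thresholds for "on the sofa" vs. "walked away" (illustrative values only).
WATCHING_MAX_M = 2.5
AWAY_MIN_M = 4.0

def main():
    playing = True
    for _ in range(10):  # a few simulated sensor ticks
        d = read_distance_m()
        if playing and d > AWAY_MIN_M:
            playing = False
            print(f"{d:.1f} m away: pausing playback")
        elif not playing and d < WATCHING_MAX_M:
            playing = True
            print(f"{d:.1f} m away: resuming playback")
        time.sleep(0.1)

if __name__ == "__main__":
    main()

The point of the sketch is simply that distance alone already allows a device to decide between two states; the approach described in the article goes much further by adding gestures and posture.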

Smarter connected objects

For Leonardo Giusti, head of design at ATAP, such developments are perfectly logical. “Technology is becoming more and more present in our lives. It is normal to start wondering how technology can take inspiration from our behaviors to adapt to us”, he says. A thermostat that changes the room temperature on its own when no one is home, a computer screen that turns on when you sit down in front of it, or a television that lowers the volume when you fall asleep: these are indeed common-sense ideas. That is why his team focused its research on “proxemics,” the study of interactions based on distance and space. The goal is to create scenarios in which humans and objects interact within a given space.

Some of these interactions already exist. Some televisions and monitors use presence sensors. Google Nest smart displays use sound waves to measure your distance from the device. The second-gen Nest Hub can even detect motion and track sleep. But Google wants to go further. Its work focuses, for example, on gestures and posture (to better distinguish each member of the household) and on body orientation (to know in which direction the gaze is directed).

More humanity in interactions with machines

For Google, it is by analyzing this body language that objects will be able to tell whether we actually intend to interact with them. To train their models, the researchers used overhead cameras to record the ballet of their comings and goings at home, so that the system can differentiate between approaching a device to use it and simply walking past it. According to Lauren Bedal, an interaction specialist at ATAP, this is a way of “pushing the boundaries of what we perceive to be possible for human-computer interaction [by] putting computers in the background so that we only pay attention to them at the opportune moments”.
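As a rough illustration of the kind of distinction described here, the short Python sketch below labels a trajectory of positions as an “approach” or a “pass-by” using a toy heuristic on the distance to the device. The function name, the coordinates and the threshold are all invented for the example; Google’s actual models are trained on far richer radar and camera data and are not described publicly in this level of detail.

import math

def is_approach(trajectory, device=(0.0, 0.0), closing_ratio=0.5):
    """Toy heuristic: label a short trajectory of (x, y) positions as an
    'approach' if the distance to the device shrinks substantially by the
    end of the trajectory, and as a 'pass-by' otherwise. The threshold is
    arbitrary and purely illustrative."""
    dist = lambda p: math.hypot(p[0] - device[0], p[1] - device[1])
    start, end = dist(trajectory[0]), dist(trajectory[-1])
    return end < start * closing_ratio

# Walking straight toward the device vs. crossing the room past it.
approach = [(4.0, 3.0), (3.0, 2.2), (2.0, 1.5), (1.0, 0.7), (0.4, 0.2)]
pass_by  = [(4.0, 1.5), (2.5, 1.5), (1.0, 1.5), (-1.0, 1.5), (-3.0, 1.5)]

print(is_approach(approach))  # True: the distance keeps closing
print(is_approach(pass_by))   # False: the person merely walks past

A real system would of course rely on learned models rather than a single threshold, but the underlying question is the same: does this trajectory end at the device, or does it merely cross its field of view?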

While some will see this as a way to make technology more human and will take an interest in smart objects capable of adapting to their users’ habits, others will no doubt balk at the hold technology is gaining over our decisions. What if, once in a while, we deliberately left the television on in the kitchen even when we’re in another room, just to know when our show starts?

Another fear is that of seeing a personal data giant like Google, for which the exploitation of this information is a core business, venture into ever more personal territory after having already tried to install its assistant, and its microphones, in our homes. This is a point on which Google, like the other major technology groups, will have to be ever more transparent and respectful.


