Revealed by the MIT Technology Review, an embarrassing case involves iRobot, the company behind the Roomba robot vacuum cleaner.
You will no longer look at your little robot vacuum the same way. Spotted by our colleagues at BFMTV, an MIT Technology Review investigation published in mid-December reveals that in 2020, devices took images of their owners and their homes, which were later posted in private groups on social networks. Some of the photos shown by the American outlet include, for example, a child lying in a hallway, his face identifiable, and a woman sitting on the toilet, her shorts pulled down.
The devices in question, made by iRobot – manufacturer, notably, of the famous Roomba – were test units provided to paid participants who had agreed, in writing, that the data streams captured by the robots – including photos and videos – would be sent back to the company. The objective: to improve the product, using the collected data to teach it new things. These were “special development robots with hardware and software modifications that are not and have never been present on iRobot consumer products for purchase“, iRobot specified, quoted by the MIT Technology Review. In other words, the company maintains, these people consented to having their daily lives and homes photographed and filmed by the robot.
The collected images were then passed to companies to analyze and exploit them. One of them, Scale AI, relied on foreign subcontractors, the investigation explains, who were tasked with labeling the various elements spotted in the images – such as chairs, doors or people – in order to improve the tool and teach it to recognize these elements. But these workers, in this case Venezuelans, shared the images on social networks, including Facebook and Discord, in private groups created to discuss their work with one another. A practice strongly condemned by both Scale AI and iRobot, and which eventually became public.
The company wants to reassure its consumers
The fifteen images presented in the MIT Technology Review article “have been disclosed externally by an image annotation service provider“, which no longer works with iRobot, the company’s CEO, Colin Angle, said in a LinkedIn post. The businessman also regrets that the outlet published the photos – with the visible faces blurred – in its article, and stresses that “the robots used to collect this data are specially equipped development robots, i.e. they are different from a consumer version of Roomba or Braava. Development robots are modified with software and hardware that is not present on the production robots that consumers buy“.
Acquired last August by the American giant Amazon for the tidy sum of 1.7 billion dollars – a deal that still has to be approved by the American competition authority – iRobot therefore hopes to reassure current owners and potential buyers of its products. It remains to be seen whether they will be convinced, as this case once again underlines the importance of personal data protection.
SEE ALSO – Artificial Intelligence: What is ChatGPT?