Microsoft wanted to publish an AI-generated article in the “Travel” section of its MSN.com site: a small tourist guide to the must-see attractions of Ottawa. Unfortunately, things didn’t go as planned.
While there are already concerns about AI-generated travel guides, Microsoft still chose to feature such content on its MSN.com platform. The mini-guide was not well received and even caused controversy, since it listed the Ottawa Food Bank as one of the essential places to visit. The article even advised visiting the place “on an empty stomach”. A recommendation in questionable taste, which highlights the limits of artificial intelligence in certain contexts.
AI and its inability to understand context
The article, titled “Heading to Ottawa? Here’s everything you shouldn’t miss!”, began rather well, suggesting tourist sites and interesting activities for a trip. However, among these must-see places was the food bank, where food aid is distributed daily. The guide described it as follows:
This organization, since 1984, collects, buys, produces and distributes food to families and individuals in need in the Ottawa area. We see how significant the daily impact of hunger is on men, women and children, and how it can be an obstacle to achieving their life goals. The people who turn to us have jobs, families to support and expenses to pay. Life is hard enough already. Remember to come here on an empty stomach.
A completely off-key description, almost ironic and wholly inappropriate, which clearly highlights the limitations of AI when it comes to understanding a particular context.
Integrating AI into content writing: mission impossible?
This example may be laughable, but it is very symptomatic of how hard it is to integrate generative AI models into written content production. They can imitate humans remarkably well, yet are quickly thrown off when asked to grasp complex contextual elements, especially those requiring a degree of sensitivity.
Remember, the “I” in “AI” doesn’t really stand for “intelligence”. An AI, in the strict sense of the term, should theoretically be autonomous, aware and able to learn from its mistakes. What we call AIs are actually algorithms, admittedly very complex ones, that merely mimic certain traits of human intelligence. Even if these models have proven their usefulness, they are currently incapable of reasoning. So the professions of journalist and editor seem to be safe… for now?
Source: Ars Technica