Google opens its experimental chatbot to public testing



Google opened up its AI Test Kitchen mobile app to give the public limited hands-on experience with its latest AI advancements, such as its conversational model LaMDA (Language Model for Dialogue Applications).

Google announced AI Test Kitchen in May, alongside the second version of LaMDA, and is now letting the public test parts of what it believes will be the future of human-computer interaction.

AI Test Kitchen is “intended to give you an idea of what it might be like to have LaMDA in your hands,” Google CEO Sundar Pichai said at the time.

Initially, it will be accessible to small groups in the United States. The Android app is available now, while the iOS app is expected “in the coming weeks.”

Warning: inappropriate content may slip through the filters

When registering, the user must agree to certain conditions, including: “I will not include personal information about myself or others in my interactions with these demos.”

Like Meta, which recently opened its AI chatbot model BlenderBot 3 to a public preview, Google warns that early versions of LaMDA “may display inaccurate or inappropriate content.” When it opened up BlenderBot 3, Meta warned that the chatbot might “forget” it’s a bot and “say things we’re not proud of.”

Both companies acknowledge that their AI can occasionally come across as politically incorrect, as Microsoft’s chatbot Tay did in 2016 after members of the public fed it nasty comments. And like Meta, Google says LaMDA has undergone “key security enhancements” to prevent it from giving inaccurate and offensive answers.

But unlike Meta, Google is taking a more restrictive approach, setting limits on how the public can communicate with the model. Until now, Google had exposed LaMDA only to Googlers. Opening it up could help Google accelerate the pace at which it improves the quality of responses.

Dialog simulation

Google is releasing AI Test Kitchen as a set of demos. The first, “Imagine it”, lets you name a place, after which the AI offers you paths to “explore your imagination”.

The second demo, “List it”, lets you “share a goal or topic” which LaMDA then attempts to break down into a list of useful sub-tasks.

The third demo, “Talk about it (Dogs edition)”, appears to be the most open-ended test, though it is limited to canine topics: “You can have a fun and open conversation about dogs, and only about dogs, which explores LaMDA’s ability to stay on topic even if you try to deviate from it,” Google explains.

LaMDA and BlenderBot 3 both aim for state-of-the-art performance in language models that simulate dialogue between a computer and humans.

LaMDA is a large language model with 137 billion parameters, while Meta’s BlenderBot 3 is a “175 billion parameter dialogue model capable of conversing in an open domain with internet access and long-term memory”.

Google’s internal testing focused on improving the AI’s safety. Google says it ran dedicated rounds of adversarial testing to find additional flaws in the model and recruited a “red team” of attack experts who “uncovered additional harmful, but subtle results,” according to Tris Warkentin of Google Research and Josh Woodward of Labs at Google.

Double-edged public exposure

While Google wants to ensure safety and keep its AI from saying shameful things, Google can also benefit from turning it loose in the wild, where it will encounter human speech it cannot predict. Quite a dilemma.

Google points out several limitations of the kind Microsoft’s Tay exhibited when exposed to the public. “The model can misunderstand the intent behind identity terms and sometimes fails to produce a response when they’re used because it struggles to differentiate between benign and adversarial prompts. It can also produce harmful or toxic responses based on biases in its training data, generating responses that stereotype and misrepresent people based on their gender or cultural background. These and other areas are being actively researched,” say Tris Warkentin and Josh Woodward.

Google says the protections it has added so far have made its AI safer, but haven’t eliminated the risks. The protections include filtering out words or phrases that violate its rules, which “prohibit users from knowingly generating content that is sexually explicit, hateful or offensive, violent, dangerous or illegal, or that discloses personal information.”

On the other hand, users shouldn’t expect Google to delete everything they say while using the LaMDA demos.

“I will be able to delete my data while using a particular demo, but once I leave the demo, my data will be stored in such a way that Google will not be able to tell who provided it and will no longer be able to respond to any deletion request,” reads the consent form.

Source: ZDNet.com




