Responsible AI: Microsoft’s turn to step up


As fears of artificial intelligence misuse grow, Microsoft is joining other big players, like OpenAI, in publicly committing to responsible use of the technology (see also: How Adobe is making its moves in AI image generation).

Antony Cook, a Microsoft vice president, issued a statement announcing three “commitments to AI customers” as part of the company’s efforts to build trust in the responsible development of AI. Cook added that Microsoft is also ready to play an active role in working with governments to promote effective regulation of AI.

“Microsoft has been on a responsible AI journey since 2017, leveraging the skills of nearly 350 engineers, lawyers, and experts dedicated to implementing a robust governance process that guides the design, development, and deployment of AI in a safe, secure and transparent way,” he explained.

How to create a culture of responsible use of AI

Commitments include:

  • Sharing Microsoft’s expertise and teaching others how to develop AI safely
  • Establishing a program to ensure that AI applications are built in compliance with legal and regulatory requirements
  • Supporting enterprise customers and partners in deploying Microsoft AI systems responsibly within its partner ecosystem

“Ultimately, we know these commitments are just the beginning and we will need to strengthen them as technology and regulatory conditions evolve,” Cook wrote.

Although the company has only recently developed its Bing Chat generative AI tool, Microsoft will begin by sharing key documents and how-tos that detail the expertise and knowledge it has gained since it began working with AI several years ago.

The company will also share training programs and invest in resources to teach others how to create a culture of responsible use of AI within organizations that work with this technology.

An “AI assurance program”

Microsoft will set up an “AI assurance program” that draws on its own experience and applies the financial services concept of “Know your customer” to AI development. The company calls it “KY3C” and says it will work with its customers to apply the obligation to “know your cloud, your customers, and your content,” Cook said.

In its quest to help its partners and customers develop and use their own AI systems responsibly, Microsoft is drawing on a global team of legal and regulatory experts, and has announced that PwC and EY are the first partners to join the program.

The commitment to supporting customers in Microsoft’s partner ecosystem will involve helping them evaluate, test, adopt and commercialize AI solutions.
