How generative AI is arriving, with great caution, in enterprise software


“It’s time for everyone to stop talking about AI, and start doing things with it.” It is with this concise formula that Matt Calkins, the founder and CEO of Appian, concluded his presentation at the Appian Europe 2023 event.

For if 2024 is to be the year generative AI enters companies, there is no question of rushing it, the Appian boss insists. “Enterprise customers and businesses must be the winners of this transformation, not big tech.”

A swipe at the tech giants that have seized on the frenzy around generative AI: Microsoft is tied to OpenAI through its cloud infrastructure, and Google is injecting AI into its Google Workspace office suite and Gmail.

How to ensure AI performance and security?

Enough to make them champions of artificial intelligence?

A difficult pill to swallow for BPM (Business Process Management) and RPA (Robotic Process Automation) players such as Appian, UiPath, Automation Anywhere, Pegasystems and SAP.

“We have been doing AI for over 10 years,” claims Matt Calkins on stage. There is therefore no question of “big tech” dislodging these players from a hyper-specialized ecosystem of API management, RPA and AI for B2B clients that demand performance and, above all, security.

Towards private AI

Appian therefore strikes the sensitive chord of security and compliance. It must be said that the company’s DNA is rooted in clients from the defense and financial services sectors.

“With our AI, we promise to take care of critical processes and offer private AI,” adds Matt Calkins. Private AI? A concept that seems to borrow from the “private cloud,” as opposed to the public cloud, where you do not know where the data is hosted.

“Until now, if you wanted to do AI in business, you either had to install an open source AI behind your firewall, which requires a high level of expertise, or use a generic AI on a supplier’s site, with real problems with the context of the responses and the quality of the information,” notes the CEO of Appian. “Our idea is to send the request together with the data to a platform. This way you will have good results.”

“The data is never used to train the model”

And this goes through several stages. “We first offered OpenAI on our platform,” said Malcolm Ross, Appian’s chief strategy officer. “But that poses challenges, because OpenAI has very minimal data security compliance.”

“So we are in the process of moving from OpenAI to Amazon Bedrock, AWS’s foundation model service released on September 28. We will integrate Bedrock into the application and the architecture, behind the Appian firewall. We will thus be able to ensure that the data never leaves our environment.” Appian is still in the prototyping phase on this point.

But above all, beyond the quality of the service provider, the development of private AI is also a question of architecture.

With private AI, “the data is never used to train the model,” says Malcolm Ross. “We use the LLM for semantic understanding of natural language. Then we do a lot of prompt engineering, to give the context on which we want to have a conversation.”
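The pattern Ross describes — passing business data inside the prompt as context, rather than using it to train or fine-tune the model — can be sketched roughly as follows. The function and field names here are illustrative assumptions, not Appian’s actual API:

```python
# Sketch of "private AI" prompting: customer data travels inside the
# prompt as context; the model weights are never updated with it.
# All names below are illustrative assumptions, not Appian's API.

def build_prompt(question: str, records: list[dict]) -> str:
    """Assemble a single prompt that carries the business context."""
    # Serialize the relevant records into a plain-text context block.
    context = "\n".join(
        "; ".join(f"{key}: {value}" for key, value in row.items())
        for row in records
    )
    # Instruct the model to answer only from the supplied context.
    return (
        "Answer using ONLY the context below. "
        "If the answer is not in the context, say so.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\n"
    )

prompt = build_prompt(
    "What is the status of order 4711?",
    [{"order": "4711", "status": "shipped", "customer": "ACME GmbH"}],
)
```

Because the data only lives in the prompt for the duration of a single request, deleting the source record is enough to make it disappear from future answers; nothing is persisted in the model itself.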


As a result of this technical choice, the data is never retained by the AI engine. Enough to guarantee the security of Appian customers, most of whom work in regulated sectors.

“When it comes to data filtering, designers have full control over the data, which makes it possible to completely obscure PII (personally identifiable information) or other identifying information from the data,” adds the Appian strategy chief.

“So even if I want to have a conversation about a German customer, and a little later that customer asserts their right to be forgotten under the GDPR, that conversation has not been preserved in the AI model.”
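The data filtering Ross mentions can be as simple as masking configured fields and obvious patterns before a record ever reaches the model. A minimal sketch — the field list and regex are assumptions, stand-ins for what a designer would actually configure:

```python
import re

# Minimal PII-masking sketch: obscure configured fields and email
# addresses before data is sent to an LLM as prompt context.
# The field list and regex are illustrative assumptions.

PII_FIELDS = {"name", "email", "phone"}
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def mask_record(record: dict) -> dict:
    masked = {}
    for key, value in record.items():
        if key in PII_FIELDS:
            masked[key] = "***"  # drop configured PII fields entirely
        else:
            # also scrub email addresses embedded in free text
            masked[key] = EMAIL_RE.sub("***", str(value))
    return masked

safe = mask_record(
    {"name": "Erika Mustermann", "email": "erika@example.de",
     "order": "4711", "note": "contact erika@example.de"}
)
```

Only the masked copy would be placed in the prompt, so the model never sees the identifying values in the first place.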

It is therefore the quality of the prompt that makes it possible to obtain responses that meet user requests, since the AI engine is not improved by interactions. “It is prompt engineering that improves the understanding of the LLM,” says Malcolm Ross.

“Prompt engineering is happening behind the scenes for now”

“Prompt engineering happens behind the scenes for now; the user doesn’t see it. Next year, we’re looking at features that will allow users to build prompts to focus the AI’s attention on a specific area.”

But this requires a high degree of precision, because the AI must not hallucinate a function name or a syntactic representation. “So we combine this AI with our own custom logic, both prescriptive and predictive, which validates the AI in real time as it returns results. If we detect a hallucination, we make corrections behind the scenes before showing the result to the user,” says Malcolm Ross.
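One simple form such validation can take is checking every function name the model emits against a catalog of functions that actually exist, and flagging anything unknown before the user sees it. A sketch under those assumptions — the whitelist and expression format are illustrative, not Appian’s actual logic:

```python
import re

# Sketch of post-generation validation: verify that every function
# name the model emitted exists in a known catalog, so hallucinated
# names can be flagged or corrected before display.
# The whitelist and expression format are illustrative assumptions.

KNOWN_FUNCTIONS = {"sum", "filter", "todate", "concat"}
CALL_RE = re.compile(r"([A-Za-z_]\w*)\s*\(")

def find_hallucinated_calls(expression: str) -> set[str]:
    """Return function names that are not in the known catalog."""
    called = {name.lower() for name in CALL_RE.findall(expression)}
    return called - KNOWN_FUNCTIONS

bad = find_hallucinated_calls("sum(filter(rows)) + autoMagic(x)")
# "autoMagic" is not in the catalog, so it would be flagged
```

A real validator would go further (checking argument counts and types, for instance), but the principle is the same: deterministic logic gates the generative output.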

Beyond data security, this way of doing generative AI also limits costs: each retraining of a model consumes GPU compute time on cloud infrastructure.

“We wake up with a hangover the next day, and we get the bill”

“The economic aspect is becoming important,” notes Malcolm Ross. “In 2023, we had the big AI party. Then we woke up with a hangover the next day, when we received the bill. So everyone is now asking how to create pricing models that are acceptable to our customers.”

While waiting for these developments, the company must still give in to the OpenAI trend. The latest version of its platform, which merges data from several sources (CRM, databases, etc.) into a data model without moving the data, now includes an AI Copilot module.

“AI Copilot can summarize texts, fill out forms based on the analysis of paper forms, and automatically write emails based on platform data,” explains Malcolm Ross.


