Yves Caseau, the CIO of Michelin, dissects ChatGPT


For regular LinkedIn users, posts devoted to the ChatGPT phenomenon are hard to miss. OpenAI, its creator, can count on this curiosity to test its system and collect valuable data.

Beyond the buzz and hype, in-depth publications on artificial intelligence are rare. Yves Caseau's is one of them. Michelin's CIO and chief digital officer has been experimenting with ChatGPT for a month.

The LLM, “simple and naive”, but effective

This period of experimentation has led him to five conclusions about artificial intelligence, its techniques and its practices. First: built on a "large language model" (LLM), ChatGPT outperforms more sophisticated, semantics-based algorithms.

Its principle consists of "predicting the next probable word," the CIO points out. "Remarkable" at producing summaries, OpenAI's algorithm "is all the more impressive for being a simple and naive algorithm".
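The "next probable word" principle can be illustrated with a toy sketch. The model below is a simple bigram frequency counter, not how ChatGPT actually works internally (real LLMs use neural networks trained on billions of tokens), but it shows the same idea: predict the most likely continuation from what was seen in training data. The corpus and function names here are purely illustrative.

```python
from collections import Counter, defaultdict

# Tiny illustrative corpus; a real LLM trains on billions of tokens.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count bigram frequencies: how often each word follows another.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent next word observed after `word`."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # -> "cat" ("cat" follows "the" twice, others once)
```

The gap between this sketch and ChatGPT is the quality of the probability estimate, not the principle: both simply pick a likely next token given the context.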

ChatGPT's performance prompts a second observation, Yves Caseau believes: the size of the training set matters more than the complexity of the algorithm itself.

"Simple algorithms trained on huge volumes of data do better than sophisticated algorithms with smaller learning corpora," he recalls, as machine-translation models had already shown.

Promising Hybrid AI

The Michelin executive also sees ChatGPT as an illustration of the power of hybrid AIs: the OpenAI system derives its strength from combining an LLM with reinforcement learning.

"We are only at the beginning; we will see more complex hybridizations," he predicts. ChatGPT also has weaknesses, however, such as its tendency to invent answers when it lacks the necessary information.

"It's perfect for creative uses (inventing a text or a poem) but dangerous for a question or a summary, precisely because ChatGPT produces fakes that look like the real thing," the chief digital officer warns.

This limit has a direct implication for professional uses of the technology. For non-creative uses, where fakes are unacceptable, the user must be "competent" in order to employ ChatGPT as a "cognitive assistant".

ChatGPT has its limits

Generating code with the tool, for instance, requires the user to master software development. In addition, to reduce the AI's error rate, it is better to apply it to broad topics.

Why? Because on "broad" questions, "the learning corpus is necessarily relevant," explains Yves Caseau. On "narrow/precise questions", by contrast, "it is quickly mistaken".

This conclusion is also a recommendation for teachers wishing to prevent "GPT cheating" among their students: "questions that require an original analysis" take the tool out of the game.
