Scientists say ChatGPT ‘lacks depth and insight’


Much ink has been spilled lately about the promise of OpenAI’s ChatGPT program to generate natural-language utterances and answer questions posed by humans.

However, many articles have also pointed out that ChatGPT produces a great deal of false information. The journal Nature reported that the program does not meet the criteria for authorship of research papers “because it cannot take responsibility for the content and integrity of scientific papers.”

Some researchers see another, more subtle limitation of ChatGPT: its results tend to be mundane, overly general, and uninformative, even when they aren’t inaccurate.

That’s what authors Barbara Treutlein and Patrick Cahan discovered when they used ChatGPT to help them write an article for the science journal Stem Cell Reports, a publication of science-publishing giant Cell Press. The article was posted on January 10.

When the answers lack depth

The editors posed various basic questions about the field of computational biology to ChatGPT. The objective was to demonstrate “how advances in computing can help researchers (…) by saving time and thus allowing us to do more research!”

The authors reproduced, in edited form, their prompts to ChatGPT and the program’s responses. Here is one of the sample questions and answers:

  • Prompt: What are the main challenges in using computational techniques to answer questions in stem cell biology?
  • ChatGPT’s answer: “The need for large amounts of data: To accurately model the behavior of stem cells, researchers need large amounts of data on the behavior of individual cells and the interactions between different cell types. This can be difficult to collect and process, especially when dealing with large numbers of cells.”

Although Barbara Treutlein and Patrick Cahan observe that ChatGPT has been “helpful in articulating how computational systems biology will help stem cell research and in identifying some obstacles,” they note that “overall, many of the responses lacked depth and insight.”

Synthesis capacity

To dig deeper into the matter, ZDNet emailed the editors, asking them to explain what distinguishes a meaningful response.

Patrick Cahan responded by email with an annotated collection of ChatGPT prompts and responses which, he writes, demonstrate the program’s “glaring lack of depth and insight”.

Despite these shortcomings, Patrick Cahan told ZDNet, “In general, I’m very impressed with how powerful this tool is.” Cahan teaches a course in computational stem cell biology and said he tested ChatGPT’s ability to write code to solve some of the course’s problems.

“It does this well (not perfectly), and it documents the code!” wrote Patrick Cahan. “I also think it has an impressive ability to synthesize and summarize information, so I’m looking forward to seeing what comes of its customization for biomedical literature.”

Source: ZDNet.com
