This anti-ChatGPT tool wants to put an end to cheating in exams, students will hate it


As ChatGPT's popularity grows by the day, students all over the world are not hesitating to use AI to finish their assignments in just a few minutes. Fortunately, teachers may now have a countermeasure.

Credit: albund / 123RF

Since its launch late last year, OpenAI's conversational artificial intelligence ChatGPT, built on GPT-3, has been a plague for schools. Thanks to ChatGPT, students are now able to finish their homework or write texts in just a few minutes, which obviously does not please teachers.

Several schools have since moved to ban the AI, but many experts quickly pointed out the futility of this measure. In the absence of a legal framework, the use of ChatGPT cannot currently be treated as cheating, which forces teachers to grade students' work normally. While it has so far been difficult to tell the difference between text written by students and text written by AI, teachers should soon be able to detect the latter more easily.

ChatGPT no longer goes unnoticed thanks to this tool

A team of Stanford researchers has proposed a new method called DetectGPT. It is essentially a barometer for determining whether a text is machine-generated, without having to train a dedicated AI or collect large datasets of comparison text. It is not the first tool of its kind: a student had already launched GPTZero, which can also detect texts written by GPT-3.

DetectGPT would detect "samples from a pretrained language model using the local curvature of the model's log-probability function". Put simply, the tool recognizes patterns that may indicate an AI-generated piece of text, based on how the text sits in the model's probability landscape.
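The intuition behind that criterion can be sketched in a few lines of code. Machine-generated text tends to sit at a local peak of the model's log-probability, so small perturbations of it score noticeably lower on average, while human text shows no such gap. The sketch below is illustrative only: `toy_log_prob` and `toy_perturb` are crude stand-ins invented for this example, whereas the real DetectGPT uses a large language model to score text and a mask-filling model to produce perturbations.

```python
import random

def detect_gpt_score(text, log_prob, perturb, n_perturbations=20):
    """Approximate the DetectGPT criterion: return the gap between the
    log-probability of the original text and the average log-probability
    of slightly perturbed versions. A large positive gap suggests the
    text sits at a local log-prob peak, i.e. it looks machine-generated."""
    original = log_prob(text)
    perturbed = [log_prob(perturb(text)) for _ in range(n_perturbations)]
    return original - sum(perturbed) / len(perturbed)

# Toy stand-ins (hypothetical, for illustration only):
def toy_log_prob(text):
    # Pretend that text with fewer distinct words is "more probable".
    return -len(set(text.split()))

def toy_perturb(text):
    # Crude perturbation: reverse one randomly chosen word.
    words = text.split()
    i = random.randrange(len(words))
    words[i] = words[i][::-1]
    return " ".join(words)

random.seed(0)
score = detect_gpt_score("the cat sat on the mat", toy_log_prob, toy_perturb)
```

In the real method, a gap above a chosen threshold flags the text as likely AI-generated; a gap near zero suggests a human author.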

This should make it easier for teachers to know which assignments have been written with AI. Stanford claims the tool is 95% accurate, an impressive figure for a first version. The team behind DetectGPT has yet to reveal much more about the tool, and it is not publicly available at this time. We hope it will soon be made available to everyone.

Source: Stanford


