OpenAI has developed a tool that can detect whether a piece of text was created by ChatGPT and flag it as AI-generated. It is designed primarily to help teachers stop students from cheating on written assignments.
The tool, which is reportedly 99% accurate, adds a watermark (invisible to the human eye) to ChatGPT's output; when that text is run through the detector, it is given a score indicating how likely it is to have been generated by AI.
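OpenAI has not published how its watermark works, but text watermarking schemes of this kind typically bias the model's token choices toward a pseudorandom "green list" derived from the preceding token; a detector that knows the scheme can then count how often tokens land in their green list. The sketch below is a hypothetical toy illustration of that general idea, not OpenAI's actual method; all function names are invented for the example.

```python
import hashlib
import random

def green_list(prev_token: str, vocab: list[str], fraction: float = 0.5) -> set[str]:
    # Seed a PRNG from the previous token so the generator and the
    # detector derive the same "green" subset without sharing state.
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    k = int(len(vocab) * fraction)
    return set(rng.sample(vocab, k))

def green_fraction(tokens: list[str], vocab: list[str]) -> float:
    # Detector side: count how many tokens fall in the green list
    # keyed by their predecessor. Unwatermarked text hovers near the
    # base fraction (0.5 here); watermarked text scores well above it.
    hits = sum(
        tokens[i] in green_list(tokens[i - 1], vocab)
        for i in range(1, len(tokens))
    )
    return hits / max(len(tokens) - 1, 1)
```

This also illustrates why the watermark is fragile: translating or paraphrasing the text replaces the tokens, destroying the statistical signal the detector relies on.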
The tool has no planned release date, as OpenAI is still weighing its pros and cons.
The company originally built the tool after studies showed that 59% of middle and high school teachers believe their students have used AI to complete their work. It was designed to give teachers an accurate way of determining whether a piece of text was written by a student or by AI.
But OpenAI is concerned that the watermarks could be removed if the text is run through a translation tool, or if a student simply added and then removed emojis. It also worries that the tool could stigmatize the use of AI by non-native English speakers. Primarily, though, OpenAI is withholding the tool because 30% of ChatGPT users said, in a survey conducted by the company, that they would use ChatGPT less if this anti-cheating technology were embedded in it, which would hurt OpenAI's bottom line and potentially damage the chatbot's reputation.
Anti-cheating technology like this isn't new: Google is beta-testing SynthID, a similar watermarking tool that detects whether content was written by its Gemini AI. But nothing currently on the market matches a 99% accuracy rate, so if OpenAI were to release its tool, it would be the best available for detecting AI-generated cheating.