The arrival of ChatGPT has posed an obvious challenge for educators: detection. Many companies have built tools that claim to detect AI-generated text, with decidedly mixed success. Now OpenAI has developed what could be a game-changing solution.
The company has created a highly accurate way to detect text generated by ChatGPT. By subtly altering the statistical structure of the text it generates, OpenAI can embed a digital watermark that is invisible to human readers but readily detectable with its tool. The technology, reportedly 99.9% accurate, could change everything in the battle against AI-driven academic dishonesty.
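The article does not describe OpenAI's actual scheme, but the general idea behind statistical text watermarking can be sketched. In one well-known approach, a keyed hash of the preceding token pseudo-randomly marks part of the vocabulary as "green", generation favors green tokens, and a detector counts how often tokens land in the green list, far more often than chance in watermarked text. The vocabulary, function names, and parameters below are illustrative assumptions, not OpenAI's implementation:

```python
import hashlib
import random

# Toy vocabulary standing in for a real tokenizer's token set (assumption).
WORDS = ["alpha", "bravo", "charlie", "delta", "echo", "foxtrot", "golf",
         "hotel", "india", "juliet", "kilo", "lima", "mike", "november",
         "oscar", "papa", "quebec", "romeo", "sierra", "tango"]

def green_list(prev_token, vocab, fraction=0.5):
    """Deterministically mark a pseudo-random subset of the vocabulary
    as 'green', seeded by a hash of the previous token."""
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    shuffled = sorted(vocab)
    rng.shuffle(shuffled)
    return set(shuffled[: int(len(vocab) * fraction)])

def generate_watermarked(start, vocab, length=60, seed=0):
    """Toy 'generator' that always picks the next token from the green
    list -- a stand-in for softly biasing a language model's logits."""
    rng = random.Random(seed)
    tokens = [start]
    for _ in range(length):
        tokens.append(rng.choice(sorted(green_list(tokens[-1], vocab))))
    return tokens

def detect(tokens, vocab, fraction=0.5):
    """Return a z-score: how far the observed green-token rate is above
    the rate expected by chance. Large positive values => watermarked."""
    pairs = list(zip(tokens, tokens[1:]))
    hits = sum(1 for prev, tok in pairs
               if tok in green_list(prev, vocab, fraction))
    n = len(pairs)
    expected = fraction * n
    variance = fraction * (1 - fraction) * n
    return (hits - expected) / variance ** 0.5

watermarked = generate_watermarked("alpha", WORDS)
plain_rng = random.Random(1)
plain = ["alpha"] + [plain_rng.choice(WORDS) for _ in range(60)]

print(f"watermarked z-score: {detect(watermarked, WORDS):.2f}")  # well above 4
print(f"plain z-score:       {detect(plain, WORDS):.2f}")        # near 0
```

The key property this sketch illustrates is that detection needs no access to the model, only the secret hashing scheme, and becomes statistically stronger as the text gets longer. It also hints at the fragility noted below: paraphrasing tokens breaks the prev-token/green-list pairing and erodes the signal.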
Despite that potential, OpenAI has been cautious about releasing the tool. Internal debate has centered on the possible impact on its user base and on concerns that detection could disproportionately penalize non-native English speakers. Proponents counter that the gains in transparency and academic integrity outweigh these risks.
While watermarking technology holds considerable promise, it is not infallible. It has already been demonstrated that resourceful users can obscure or even strip out such digital signatures. Moreover, detection works best when watermarking is applied across all usage, which further complicates the delicate balance between protecting intellectual property and retaining user trust.
Amid the escalating fight against AI-generated misinformation and academic dishonesty, OpenAI's decision on whether to release its watermarking tool comes at a critical moment.