Student cheating solution? OpenAI tests ChatGPT “watermark” tool

OpenAI has developed a new tool designed to detect text generated by its own ChatGPT system. The tool uses a method called “text watermarking,” which subtly alters ChatGPT’s word choices to embed an invisible, identifiable mark in the output.
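
OpenAI has not published the details of its watermarking scheme, but the general idea of statistical text watermarking is well understood. The toy Python sketch below illustrates one common approach (sometimes called "green list" biasing): the previous token pseudorandomly splits the vocabulary, the generator quietly favors the "green" half, and a detector that knows the seeding scheme can later measure how often tokens landed on their green lists. All names, the tiny vocabulary, and the scoring are hypothetical and purely illustrative, not OpenAI's actual method.

```python
import hashlib
import random

# Illustrative sketch only: this is NOT OpenAI's algorithm, just a generic
# "green list" watermark. Each step, the previous token seeds a pseudorandom
# split of the vocabulary; generation prefers the green half; detection
# counts how many tokens fall on their predecessor's green list.

VOCAB = ["the", "a", "quick", "slow", "brown", "red", "fox", "dog",
         "jumps", "runs", "over", "under", "lazy", "sleepy", "cat", "bird"]

def green_list(prev_token: str, fraction: float = 0.5) -> set:
    """Pseudorandomly partition the vocabulary, seeded by the previous token."""
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    shuffled = VOCAB[:]
    rng.shuffle(shuffled)
    return set(shuffled[: int(len(shuffled) * fraction)])

def generate_watermarked(length: int, start: str = "the") -> list:
    """Toy 'model': pick tokens uniformly, but only from the green list."""
    rng = random.Random(0)
    tokens = [start]
    for _ in range(length):
        candidates = sorted(green_list(tokens[-1]))
        tokens.append(rng.choice(candidates))
    return tokens

def detection_score(tokens: list) -> float:
    """Fraction of tokens that sit on the green list of their predecessor."""
    hits = sum(1 for prev, tok in zip(tokens, tokens[1:])
               if tok in green_list(prev))
    return hits / max(len(tokens) - 1, 1)

if __name__ == "__main__":
    marked = generate_watermarked(50)
    unmarked = [random.Random(1).choice(VOCAB) for _ in range(50)]
    print(f"watermarked score: {detection_score(marked):.2f}")   # close to 1.0
    print(f"unmarked score:    {detection_score(unmarked):.2f}")  # roughly 0.5
```

In a real system the bias would be a small nudge on the model's probabilities rather than a hard restriction, so the text still reads naturally while the statistical signal remains detectable over a long enough passage.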

While this technology could help quickly identify students using ChatGPT for assignments, OpenAI is still deciding whether to release the tool publicly. A company spokesperson acknowledged the potential benefits but also highlighted risks. These include the possibility that people could find ways to circumvent the watermark, and that the tool might unfairly impact non-native English speakers.

This effort follows earlier, less accurate AI text detection tools from OpenAI, which the company shut down last year. OpenAI cautions that the new watermarking method may not be foolproof: it could fail if ChatGPT-generated text is later translated, rewritten, or processed by another AI model.
