OpenAI has developed a tool that could expose cheating students by detecting AI-generated text, but the company is hesitant to release it publicly. The text watermarking tool, which is reported to be highly accurate, is capable of identifying essays or other content produced using ChatGPT.
OpenAI acknowledged the development of this tool in a May blog post, but internal debates and concerns about its impact have kept it on the back burner. While the tool has proven resilient against localized tampering like paraphrasing, OpenAI admits it is less robust against globalized tampering, such as translating the text or rewording it with another AI model.
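OpenAI has not published the details of its watermarking scheme, but the general idea in the research literature is statistical: the generator subtly biases its token choices according to a secret rule, and a detector measures how strongly a text follows that rule. The toy sketch below (all names and the vocabulary are illustrative assumptions, not OpenAI's method) uses a "green list" scheme, where the previous token pseudorandomly splits the vocabulary in half and a watermarking sampler prefers the green half:

```python
import hashlib
import random

# Toy vocabulary; a real system would operate over a model's full token set.
VOCAB = ["the", "a", "cat", "dog", "sat", "ran", "on", "under", "mat", "rug"]

def green_list(prev_token: str) -> set[str]:
    """Pseudorandomly mark half the vocabulary 'green', keyed on the previous token."""
    rng = random.Random(hashlib.sha256(prev_token.encode()).digest())
    return set(rng.sample(VOCAB, len(VOCAB) // 2))

def watermarked_next(prev_token: str) -> str:
    """A maximally biased sampler: always emit some green token."""
    return sorted(green_list(prev_token))[0]

def green_fraction(tokens: list[str]) -> float:
    """Detector statistic: the share of tokens drawn from their green list.

    Unwatermarked text hovers near 0.5 by chance; watermarked text sits
    far above, which is what makes detection statistically reliable.
    """
    hits = sum(t in green_list(p) for p, t in zip(tokens, tokens[1:]))
    return hits / max(len(tokens) - 1, 1)
```

Because the signal is an aggregate over many tokens, localized edits such as paraphrasing a sentence only flip a handful of green tokens and leave the overall statistic intact, whereas translating or fully rewording the text replaces nearly every token and erases it.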
Another reason for the delay is the potential negative effect on specific groups, particularly non-native English speakers who might rely on AI for language assistance. OpenAI is carefully weighing the risks and benefits of releasing the tool, as it could have far-reaching implications beyond the company's own ecosystem.
For now, the company is focusing on other solutions like content classifiers and metadata, and prioritizing the release of authentication tools for audiovisual content.
This news comes amidst a flurry of activity in the AI space, with OpenAI recently announcing GPT-4o Long Output and Google slashing the price of its Gemini 1.5 Flash model, sparking a potential price war in the AI market.
