For about a year now, OpenAI has had a system that can embed invisible watermarks in text generated by ChatGPT.
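OpenAI has not published how its watermark works, but statistical schemes of this kind typically bias the model's word choices toward a pseudo-random "green list" derived from the preceding context; a detector then reruns the same derivation and checks whether green words appear far more often than chance. A minimal sketch of that general idea (function names and the word-level simplification are illustrative, not OpenAI's actual algorithm):

```python
import hashlib
import random

def green_list(prev_word, vocab, fraction=0.5):
    # Seed a PRNG with a hash of the previous word so the
    # detector can reproduce exactly the same partition later.
    seed = int(hashlib.sha256(prev_word.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    shuffled = sorted(vocab)
    rng.shuffle(shuffled)
    return set(shuffled[: int(len(shuffled) * fraction)])

def detect(words, vocab, fraction=0.5):
    # Fraction of words that fall in their predecessor's green list.
    # Watermarked text scores well above `fraction`; ordinary text
    # hovers around it.
    pairs = list(zip(words, words[1:]))
    hits = sum(1 for prev, cur in pairs
               if cur in green_list(prev, vocab, fraction))
    return hits / max(len(pairs), 1)
```

A generator that always prefers green words produces text scoring near 1.0 on `detect`, while unwatermarked word sequences score near 0.5, which is what makes the signal both invisible to readers and easy for the issuer to verify.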
To rat out students, or not
This is what the Wall Street Journal reports, noting that the company naturally has the tools to spot those watermarks, and thus to catch anyone using ChatGPT for dubious purposes, such as having a machine write their homework instead of doing it themselves.
So, when will these tools be made available to the public? OpenAI is apparently divided on the question. Releasing them would seem the fairest approach, but it could also drive away part of the user base: according to a survey the company conducted among ChatGPT users, nearly 30% say they would stop using the software if the watermarking system were rolled out.
OpenAI nevertheless continues to explore ways to distribute these tools without alienating part of its audience.