Why OpenAI Won’t Let You Spot AI Text


OpenAI, creators of ChatGPT, started as a nonprofit with the goal of ensuring artificial intelligence (AI) would benefit humanity.

They dropped the nonprofit part, but they remain focused on developing and promoting AI to ensure safety and widespread benefits.

So, it comes as a surprise to learn that they’ve been sitting on a way of watermarking ChatGPT-generated text for a year. A tool that could stop students from copy-pasting essays has existed all that time, yet OpenAI haven’t made it available. Where is the benefit to humanity in keeping this from users?

Wait, What?

OpenAI could have made ChatGPT better for humanity. Adding watermarks to AI-generated text would let us weed out those AI-generated Medium posts (before you ask, this isn’t one of them), X comments and student essays.
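OpenAI haven’t published how their watermark works, but academic research gives a flavour of the idea. Below is a minimal sketch in Python of a “green list” scheme in the spirit of Kirchenbauer et al. (2023): a secret key pseudorandomly biases which tokens the model picks, and a detector checks whether suspiciously many tokens landed on the “green” side of that split. The toy vocabulary, key and parameters are invented for illustration; this is not OpenAI’s actual method.

import hashlib
import math
import random

# A toy "green list" watermark, after Kirchenbauer et al. (2023).
# Everything here (vocabulary, key, parameters) is invented for the
# example; OpenAI have not published their own method.

VOCAB = [f"tok{i}" for i in range(1000)]  # stand-in for a real tokenizer's vocabulary
GREEN_FRACTION = 0.5                      # share of the vocabulary marked "green"
SECRET_KEY = "demo-key"                   # the watermarker's secret

def green_list(prev_token: str) -> set:
    """Pseudorandomly split the vocabulary, keyed on the previous token."""
    seed = int(hashlib.sha256((SECRET_KEY + prev_token).encode()).hexdigest(), 16)
    rng = random.Random(seed)
    shuffled = sorted(VOCAB)
    rng.shuffle(shuffled)
    return set(shuffled[: int(len(VOCAB) * GREEN_FRACTION)])

def generate(n_tokens: int) -> list:
    """Toy 'model': sample only green tokens. A real model would instead
    add a small bias to green-token logits, preserving output quality."""
    tokens = ["<start>"]
    for _ in range(n_tokens):
        greens = green_list(tokens[-1])
        tokens.append(random.choice(sorted(greens)))
    return tokens[1:]

def detect(tokens: list) -> float:
    """z-score: how far the green-token count sits above pure chance."""
    hits, prev = 0, "<start>"
    for tok in tokens:
        if tok in green_list(prev):
            hits += 1
        prev = tok
    n = len(tokens)
    expected = n * GREEN_FRACTION
    spread = math.sqrt(n * GREEN_FRACTION * (1 - GREEN_FRACTION))
    return (hits - expected) / spread

watermarked = generate(200)
human_like = [random.choice(VOCAB) for _ in range(200)]
print(f"watermarked z-score:   {detect(watermarked):.1f}")  # large positive
print(f"unwatermarked z-score: {detect(human_like):.1f}")   # near zero

The point is that the watermark is statistical: invisible in any single sentence, but glaring over a few hundred tokens, which is how a detector can be highly accurate without changing what the text says.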

So, what possible reason have they given for not making this technology available?
As ever with tech companies, it comes down to profit.

Consequences of “Do No Harm”

Word on the street is that OpenAI have debated internally over their (super accurate, by the way) watermarking technology. It’s not that it would reduce the quality of the output (it wouldn’t); it’s that it would turn off users.

Yes, that’s right. Users surveyed said they would be less likely to use ChatGPT if its text were watermarked, so OpenAI pulled the plug.

It’s as if they’re stating the obvious: “Our ethics matter, but not as much as our profit.”

I kind of wish companies were honest about it, like in the old days when Nike used sweatshops and Nestlé promoted formula over breastmilk: an honest pursuit of profit at all costs.

Competition Will Get Us There

Google announced its own watermarking (SynthID) at this year’s Google I/O, so OpenAI will probably follow suit soon. OpenAI say they are considering metadata instead, and are already using it for AI-generated images.

Hopefully, we’ll eventually stop those kids copy-pasting papers on the Supreme Court, but it would be better to speed the process up, wouldn’t it? For the good of humanity?

Conclusion

In a world where AI advances faster than most of us can keep up, it’s ironic that OpenAI can’t be bothered to implement the safeguards that would demonstrate the ethics they profess to uphold.

I can’t say I’m surprised. This is Big Tech, after all: just another company that would probably lay off all its software engineers at the drop of a hat.
