Call It What It Is: ChatGPT Is BS

Photo by Marco Ghirello @ghire on Unsplash

What you say is incredibly important, perhaps even more important than what you mean.

Our software developers have developed a habit of saying, “I’m completing ticket TS9292 in parallel with ticket TS8373.” In parallel? Really? You’re working on two branches simultaneously? Or do you actually mean concurrently, which would at least be physically possible? Personally, I find precision important in software engineering.

This is why the paper “ChatGPT is bullshit” is of interest: “hallucination” doesn’t make sense as a term for what these models do.

The Philosophical Take

Thank you to the authors of the paper, Joe Slater, James Humphries, and Michael Townsen Hicks, for calling it what it is: AI inaccuracies are not hallucinations.

AI generates plausible-sounding text (or code) without any regard for its veracity. It’s simply the ultimate office blowhard who doesn’t give a damn about the truth.

The Legal Brief Fiasco

Remember the New York lawyers who were sanctioned for citing fake, ChatGPT-invented cases in a legal brief?

It wasn’t a glitch on ChatGPT’s part; it was ChatGPT doing what it does best: stringing words together so they sound like they make sense, even when they do not.

Generative AI simply predicts the next word in a sequence based on its training data; it has no understanding of the information it generates. The model consumes billions of pages of text and spews words back out according to context. Truth plays no part in the process, and accuracy means nothing to it.
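To make that concrete, here is a deliberately tiny sketch in Python. It’s my own toy example (a bigram word model over a made-up corpus), not how ChatGPT actually works under the hood, but the core mechanism is the same: pick a statistically plausible next word, with nothing anywhere representing whether the resulting sentence is true.

```python
import random
from collections import Counter, defaultdict

# Toy corpus (invented for illustration). Real models train on billions
# of pages, but the principle holds: store statistics about which words
# follow which, then sample from them. Truth is never represented.
corpus = (
    "the court ruled in favour of the plaintiff . "
    "the court ruled against the defendant . "
    "the court cited a precedent that does not exist ."
).split()

# Count which word follows each word (a bigram model).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def generate(start: str, length: int = 8) -> str:
    words = [start]
    for _ in range(length):
        candidates = following[words[-1]]
        if not candidates:
            break
        # "Plausible" here means "statistically common", not "correct".
        choices, weights = zip(*candidates.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))  # e.g. "the court ruled in favour of the defendant ."
```

Every sentence this produces sounds like legal prose; none of it is checked against reality. Scale that up by a few billion parameters and you get a very fluent bullshitter.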

Coding Implications

When coders use code from ChatGPT, they risk hallucinations creeping into their work, and we should think of those as lies. With code this is critical, but only if we unquestioningly “believe” those lies. Copy-and-paste coders are being caught pushing problematic code because they are unable to see the errors in the code they are supposedly “working” on.
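To illustrate, here is a hypothetical example of my own (the function and its bug are invented, not taken from the paper or any real ChatGPT output): AI-suggested code often reads perfectly well and is still wrong, and only an actual check exposes it.

```python
# The kind of plausible-looking function an AI assistant might hand back.
# It reads fine, it runs, and it is wrong.
def is_leap_year(year: int) -> bool:
    # Omits the century rules: years divisible by 100 are not leap
    # years unless they are also divisible by 400.
    return year % 4 == 0

# The only defence is to refuse to "believe" the code until it's tested.
def test_is_leap_year():
    assert is_leap_year(2024)       # passes
    assert is_leap_year(2000)       # passes (2000 is divisible by 400)
    assert not is_leap_year(1900)   # FAILS: the confident answer was false
```

The copy-and-paste coder ships the first function; the sceptical one writes the second and catches the lie before production does.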

The danger of this framing is that it absolves the users of these systems of blame when something goes wrong. Talk of AI “hallucinations” diverts attention from the humans who design, implement, and oversee the systems in question.

Remember the Google employee who thought the company’s chatbot was sentient? It’s a great example of why we need to keep our expectations of AI grounded in reality rather than buying into the hype.

Conclusion

Please, the next time you hear a colleague say that an AI is hallucinating, tell them that they’re enabling the avoidance of responsibility.

That, or scream “Bullshit!” at them and explain that they should use the right words for what they mean.

Either way, they’ll get the message.
