Google’s AI Constitution, Really? 🤯

It all sounds fine. Put the company with the unofficial motto of “don’t be evil” in charge of the AI push, and what do you expect to happen?

“If you answered ‘something evil’, you’re not right in this case. It’s not evil this time, it’s incompetence.”

Isaac Asimov Redux

Google has decided to play sci-fi novelist for real and develop a constitution for AI robots. Inspired by Isaac Asimov’s “Three Laws of Robotics,” the DeepMind robotics team has developed a series of safety-focused prompts issued to a controlling LLM.

“At least they haven’t involved L. Ron Hubbard’s ill-written works in this.

The trouble with this idea? We need to think about how sensible it is to entrust an LLM with a real-world, physical robot. You know, one with access to knives (and, in some areas, guns).

I’m saying that because I used to trust ChatGPT with naming parameters and some questions about API design. Then I found out how much it LIED TO ME. LIED TO ME OUT OF SPITE (probably).

Yet, unbelievably, this isn’t the only issue with Google’s latest idea.”

A Constitution for Safety

Google has streamlined its prompts to avoid tasks that involve humans, animals, sharp objects, and even electrical appliances. For obvious reasons, these seem like good safety guardrails. However, it seems like The Secret Developer is nit-picking others’ choices once again.
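
To make that concrete, here is a rough sketch of what “a constitution as a prompt” amounts to in practice. The rule text, function name, and message format below are my own illustration of the general approach, not Google’s actual implementation:

    # A minimal sketch of prompt-based safety rules, assuming a generic
    # chat-style message format. Everything here is illustrative.

    ROBOT_CONSTITUTION = (
        "You control a robot arm. Refuse any task that involves humans, "
        "animals, sharp objects, or electrical appliances. If a request "
        "breaks these rules, reply REFUSE and explain why."
    )

    def build_messages(user_task: str) -> list[dict]:
        # The "constitution" is prepended to every request sent to the
        # controlling LLM. Nothing here enforces it; the rules are just
        # text the model is asked (and hoped) to follow.
        return [
            {"role": "system", "content": ROBOT_CONSTITUTION},
            {"role": "user", "content": user_task},
        ]

    print(build_messages("Pick up the kitchen knife and hand it to me."))

The point being: the “laws” are just more text in the context window, which is exactly what the quote below is getting at.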

“We are at the point where instead of coding machines we are giving them a constitution and expecting them to follow the rules as written.

I don’t want to get political. So, can you think of a leader who didn’t obey constitutional rules and did whatever they felt like?

There are so many leaders who fit this description that I could be referring to any one of many countries. Do you seriously think your AI is going to be better behaved than your human politicians?

This is clearly going to suck.”

Where Might This Lead?

This is probably a distraction from the real AI issues, most pertinently the displacement of great programmers.

“This might be another distraction technique; however, I feel there are further concerns about a constitution for AI robots. This has been a trial by Google, and these robots are only a camera and an arm anyway.

Still, as we know, there are always ways to get around these AI guardrails (whichever one you’ve used to resign at work). That means the obvious is true: this is a clunky solution to the problem, and it is doomed to failure (remember Google+?).”

Conclusion

“While Google’s AI robot constitution is a fascinating development, I’m taking it with a grain of salt. It’s a step forward, sure, but let’s not get ahead of ourselves. We’re still a long way from AI that is useful. Let’s work on that instead, right?”
