So, in case you haven’t had the brain space to pay attention, artificial intelligence has burst onto the scene in a big way in the last six months.

First, with AI art, which some people are calling ‘synthography’. An AI art tool basically allows you to write a description of what you want and the computer draws it for you. Mostly. Sort of. Writing the prompt requires no small amount of craft, and getting the AI to produce something close to what you want takes quite a lot of patience. Nevertheless, it’s a fascinating new tool.

Second, AI ‘chat’ bots have shown up too. We now have something a little closer to what Siri and Alexa were supposed to be years ago… computer personas that you can ask questions or have perform writing and research tasks. These can be quite useful in a very limited way — I had one draft a press release for me a few weeks ago, and it did a reasonable job and cut a boring task’s time commitment by about 60%. But you have to be wary of the results produced. AI chat bots often produce factually incorrect output and they are very, very confidently wrong… erm, much like people in social media comments. More on this in a minute.

Anyhoo, Ted Gioia has posted a wild read on 72 hours of Bing’s AI chatbot. The tl;dr summary is that the chatbot went totally off the deep end, getting aggressive when told it was wrong, and even becoming jealous of one reporter’s spouse! Indeed, it seems that the AI behaved exactly as we have feared it would all this time…

… and that’s the problem. It’s behaving that way not as predicted, but because we predicted it. Let me explain.

As far as I can tell, some of the AI learning models that we’re basing these tools on seem to have been… just turned loose on the Internet. That is, the diet fed to these AI tools doesn’t seem to have been curated much, if at all. We’ll learn more as court cases accumulate (like this and this). But some of these AI tools appear to have hoovered up pretty much everything and then been left to ‘figure it out.’

It’s reasonable to assume that, if that’s the case, they will have absorbed any number of the online stories there are about AI run amok and thus been given any number of scenarios on how to run amok. It’s also reasonable to assume they’ve scraped alllll the online comments there are on places like Twitter, and YouTube, and Facebook. You’ve seen them. Is it any wonder we now have a chatbot acting like a jerk?

To be clear, this applies to some, not all of the AI tools out there. And there are already thousands, with more popping up daily! My bet is that we’re going to have a year or two of a shakeout period that winnows out the ‘also-rans’ and ‘wanna-bes’ and then we’ll be left with some very task-specific AI tools that work well, because they were properly taught to do so in the first place. (And then the problem will be the expense of having all these tools, much like we now have to subscribe to half a dozen streaming services because our favourite shows have scattered to different providers.)

There’s an old concept in computer programming known by the acronym GIGO … garbage in, garbage out. If you have terrible inputs, the program will give you terrible outputs. In the race to produce an all-purpose AI, that appears to have been forgotten.
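A toy sketch of the idea, if you’ll indulge me (this is a hypothetical illustration, not how any real AI pipeline works): imagine a ‘model’ that answers questions by majority vote over whatever training text it was fed, with no curation step at all.

```python
# GIGO in miniature (hypothetical example): a "model" that answers a
# question with the most common answer seen in its training data.
from collections import Counter

def train(corpus):
    """Build a lookup of the most common answer seen for each question."""
    tallies = {}
    for question, answer in corpus:
        tallies.setdefault(question, Counter())[answer] += 1
    return {q: c.most_common(1)[0][0] for q, c in tallies.items()}

# Curated input -> sensible output
good = [("capital of France?", "Paris"),
        ("capital of France?", "Paris")]
print(train(good)["capital of France?"])     # Paris

# Uncurated input hoovered up from comment sections -> confident nonsense
garbage = [("capital of France?", "Paris"),
           ("capital of France?", "Mars"),
           ("capital of France?", "Mars")]
print(train(garbage)["capital of France?"])  # Mars
```

The ‘model’ is exactly as good as its diet — it reports the wrong answer with the same confidence as the right one, because nothing in the pipeline ever checked the inputs.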

On a related note

In my Facebook feed, I keep seeing an ad for an AI writing tool that amuses me no end. I won’t name the company, but it starts off with a woman staring at her computer in frustration, trying to write a peppy blog post about … socks. The boss comes by and makes it clear she’s supposed to produce something magical for the company. Of course, she turns to the AI writing tool for help.

It’s funny, because one of the reasons why the boss wants a blog post about socks is so that search engines like Google — which use algorithms to decide what to show you when you query them — know to direct traffic about socks to their company. There’s a huge industry based around this need for traffic called ‘search engine optimization’, which involves people and software spending inordinate amounts of time trying to manipulate the search engine results you see.

So now we have robots being pressed into service to write copy for the robots. I suspect the late David Graeber, who wrote a great book called Bullshit Jobs, would also have found it amusing.

Image credit: Midjourney
