I hope you haven’t missed the big AI revolution going on right now? No? Yes? YES? You have no idea about it? Where do you live? On Mars? Ah OK, you were kidding. AI is being talked about everywhere, new devices like the Humane AI Pin or the Rabbit R1 are built entirely around it, and everyone involved is scrabbling around like mad for venture capital to fund their incredible ideas.
Having access to these language models is immense, right? We can sit down, whip out our AI, and enjoy a better-than-life experience. We can ask it questions and it often comes up with remarkably lucid explanations. It seems intelligent. Here’s a conversation I’ve just had with ChatGPT’s latest model, after asking it to adopt a British conversational style:
Me: The weather’s nice isn’t it?
ChatGPT: Oh, absolutely! It’s a lovely day, isn’t it? Perfect for a stroll in the park or perhaps a spot of tea in the garden. How are you planning to enjoy the fine weather?
Me: I have to work, sadly. In fact I’m working right now.
ChatGPT: Ah, that’s a bit of a shame, isn’t it? But needs must, as they say. Hopefully, you can steal a few moments to enjoy the sunshine, even if it’s just a quick breather by the window. Anything I can help you with to make your workday a tad brighter?
Me: Give me a joke to help me get over my vague sense of imposter syndrome!
ChatGPT: Of course! Here’s a bit of light-hearted humour for you:
Why did the scarecrow win an award?
Because he was outstanding in his field!
Hopefully, that brought a smile to your face. Remember, everyone feels like an imposter at times, but you’re doing brilliantly!
It’s a decent conversation, although no British friend I know would chat like that. They’d be less polite and more vaguely abusive, to be honest. But it does seem intelligent. So then I asked it a tricky question:
Me: I’m struggling with a maths question. I have a formula for a parabola, and I need a graph and the location of the y intercept: f(x)=3x²+7
ChatGPT: [replied with a neatly plotted graph of the parabola and correctly placed the y-intercept at (0, 7)]
That’s so freaking clever!
Except… except… it’s not actually clever.
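To be fair, the answer itself checks out, but it’s also the sort of thing you can verify in ten seconds without any AI at all. Here’s a quick sketch of my own in Python (assuming you’ve got numpy and matplotlib lying around) that plots the parabola and confirms the y-intercept at (0, 7):

```python
# Quick check of the y-intercept of f(x) = 3x^2 + 7, no AI involved.
import numpy as np
import matplotlib.pyplot as plt

def f(x):
    return 3 * x**2 + 7

print(f"y-intercept: (0, {f(0)})")  # f(0) = 7, so the intercept is (0, 7)

# Plot the parabola and mark the intercept
xs = np.linspace(-3, 3, 200)
plt.plot(xs, f(xs), label="f(x) = 3x² + 7")
plt.scatter([0], [f(0)], color="red", zorder=3, label="y-intercept (0, 7)")
plt.legend()
plt.show()
```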
Let’s try asking it a question you could put to a seven-year-old child, who would almost universally get the answer right:
As I was going to St. Ives, I met a man with seven wives who was going to St Ives. How many people are going to St Ives?
I think you’ll agree that this riddle is pretty simple. I’m going to St Ives, the man is going to St Ives, and the seven wives are going to St Ives: nine people in total. Look at this madness I got back:
The riddle goes as follows:
“As I was going to St. Ives, I met a man with seven wives, Each wife had seven sacks, Each sack had seven cats, Each cat had seven kits.”
The key detail in the riddle is the first line: “As I was going to St. Ives.” The person telling the story is the one going to St. Ives. The man with the seven wives and all the others are simply met on the way; the riddle doesn’t specify their destination.
Therefore, only the narrator is confirmed to be going to St. Ives. The answer is:
1 person (the narrator) is going to St. Ives.
And that reveals the weakness of large language models. There’s no inherent cleverness. The model works out what it thinks you’re talking about, matches it against the most likely known question, and then generates a response based on what usually follows that pattern. If the problem is very well known, it can do some really quite incredible things. But give it a novel variation of an old problem and it will absolutely fall apart.
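To make the point concrete, here’s a deliberately daft little Python sketch of my own. It is absolutely not how ChatGPT works under the hood, but it captures the failure mode: match the question to something familiar, ignore the detail that makes it different, and serve up the memorised answer with total confidence.

```python
# A toy "model" that pattern-matches questions to memorised answers.
# This is my own illustrative sketch, not ChatGPT's actual mechanism.

MEMORISED_ANSWERS = {
    "st ives": "1 person (the narrator) is going to St Ives.",
    "scarecrow": "Because he was outstanding in his field!",
}

def toy_llm(question: str) -> str:
    q = question.lower()
    # Spot a familiar keyword and return the stock answer,
    # paying no attention to the detail that changes the question.
    for keyword, answer in MEMORISED_ANSWERS.items():
        if keyword in q:
            return answer
    return "Here is a confident-sounding guess."

# The altered riddle: the man and his seven wives ARE going to St Ives,
# so the right answer is nine, but keyword matching never notices.
print(toy_llm("I met a man with seven wives who was going to St Ives. "
              "How many people are going to St Ives?"))
```

Swap the keyword matching for a few hundred billion parameters and the behaviour gets vastly more sophisticated, but the habit of leaning on familiar patterns rather than actually reasoning is the same trap.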
Life is full of novel problems. Don’t be misled by the clever bullshit attitude of a large language model, but at the same time, if you know its limitations, it can be a wonderful tool. Use it wisely and carefully, or there’s a risk it’ll make a fool of you. And yes, the post image at the top is, indeed, AI generated – because I have bugger all budget and this is a personal blog. Bite me!