The discussion surrounding AI is moving quicker than ever this week, with new barriers being broken down while some influential names seek to pause development and assess just what is going on.
Okay, things changed quickly between writing this article, submitting it, and its being published, necessitating several rewrites and redrafts. The story is moving more quickly than ever. So, we start by answering the question of whether Artificial General Intelligence is already here. I hate to give away the punchline of an article, but I'm willing to make an exception. Yes, I think it is.
It's hard to define intelligence. There's no objective yardstick to measure it. It's not like filling a jug with water when it's obvious that you've reached the brim. You can't measure it with a ruler or with an oscilloscope. You can't take its blood pressure or query its oxygen levels. If it's a race, we don't know where to find the finish line, nor even the track. For all our scientific and technical knowledge, what we know about whether something is intelligent amounts to little more than a feeling about it.
You can try to define it! I rather smugly like my own definition that I think I conjured up when I was at university: "Intelligence is the ability to override evolution". As a definition, it still has a lot going for it, but I'm not sure it helps with AI.
We've grown up thinking of (and reacting to) intelligence as something biological entities possess. So it feels much more comfortable to ascribe intelligence to a sheepdog than to a tractor.
The driverless example
An almost perfect example of that discomfort is driverless cars. Whatever you think of Elon Musk, and whatever your feelings about the safety of his Full Self Driving software, it's not doing nothing. If you had to choose between being driven across a city by a car running Tesla FSD and one with no driver at the wheel at all, human or otherwise, I'm sure you'd prefer the one that at least claims to be self-driving, however flaky. The other would likely barrel into the nearest Starbucks at the first bend.
Give or take radar and LIDAR sensors (essential corroborative instruments to complement video cameras), Musk is close to having a viable and safe self-driving vehicle. What do I mean by safe? I mean statistically better than human drivers. That's easy to prove. It doesn't have to be perfect, just better.
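What would proving that look like? A minimal sketch of the kind of comparison involved, using entirely invented numbers, might go something like this: compare crash rates per million miles and ask whether the gap is bigger than the statistical noise.

```python
# A rough sketch of comparing crash rates, using invented numbers purely for
# illustration (these are NOT real Tesla or human-driver statistics).
import math

human_crashes, human_million_miles = 4_800, 1_000   # hypothetical fleet data
fsd_crashes,   fsd_million_miles   = 3_100, 1_000

human_rate = human_crashes / human_million_miles     # crashes per million miles
fsd_rate   = fsd_crashes   / fsd_million_miles

# Treat crash counts as Poisson; the variance of each rate is count / exposure^2.
std_err = math.sqrt(human_crashes / human_million_miles**2 +
                    fsd_crashes   / fsd_million_miles**2)
z_score = (human_rate - fsd_rate) / std_err

print(f"human: {human_rate:.2f}, FSD: {fsd_rate:.2f} crashes per million miles")
print(f"z = {z_score:.1f}  (well above ~2 would suggest a real difference)")
```

The statistical machinery is straightforward; the hard part is collecting honest, comparable exposure data for the two groups.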
Maybe the best way to assess whether or not an entity is intelligent is to hide it from the interrogator. This would eliminate any organic vs machine bias. And this, of course, is the classic Turing test. Engage the putative intelligence in a conversation and see if it seems intelligent to you, without any contextual bias.
But the more obvious it seems that AI models like GPT-4 have passed the Turing test, the more it looks like that famous appraisal of intelligent behaviour is not an accurate or fair trial, even though it was the result of clear and groundbreaking thinking.
You don't have to go into too much depth to realise that intelligence is more than just fooling someone in the same way that a film set does. Film sets look convincing from certain angles. Walk around the back of the scenery, and you'll discover it's just painted wooden hoardings (or, more likely these days, a giant LED screen).
Recent developments in Large Language Models (LLMs) have shown capabilities way beyond the ability to merely "guess the next likely word". They can write poetry (sometimes so bad there's almost a sense of irony about it). They can create iPhone apps. They can at least feign a sense of humour. They can be self-deprecating or deliberately insulting (if you ask them to; hopefully it's not a default behaviour).
They can be polite, track complicated lines of thought, summarise large tracts of text, and keep a persistent memory of what's been said and its context, as well as understand complex and modal language forms.
The snail and the galaxy
I asked Microsoft's Bing AI how long it would take a snail to traverse our galaxy, the Milky Way. The answer was good in itself:
"The Milky Way galaxy is about 100,000 light-years across. If it takes 33 hours for a snail to travel one mile, then it would take a snail about 1.8 x 10^14 years to travel from one side of our galaxy to the other. That's much longer than the age of the universe which is estimated to be about 13.8 billion years old."
My response was just:
"Wow!".
And, entirely to my surprise, it came back and said:
"Yes, it's quite a long time! Is there anything else I can help you with?"
I wasn't even expecting a reply. And yet it not only knew that "wow" is an expression of surprise but also what I was "wow"-ing about. There was also a sense of hyperbolic understatement. I might have expected a reply like "Yes, 1.8 x 10^14 years is a long time", but "quite a long time", as a galactic-scale understatement, shows a grasp of the way we use language that goes beyond merely parroting a phrase that's already out in the open.
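For the curious, here's a quick back-of-the-envelope version of that sum. The constants are my own assumptions, not Bing's working, so the exact exponent depends on what you plug in; the punchline, that the trip dwarfs the 13.8-billion-year age of the universe, survives any reasonable choice.

```python
# Rough sanity check on the snail-across-the-galaxy figure.
# Assumed constants (mine, not Bing's): ~5.88 trillion miles per light-year,
# one mile every 33 hours, and 24 * 365.25 hours per year.
miles_per_light_year = 5.879e12
galaxy_width_ly = 100_000
hours_per_mile = 33
hours_per_year = 24 * 365.25

total_miles = galaxy_width_ly * miles_per_light_year
total_years = total_miles * hours_per_mile / hours_per_year
print(f"{total_years:.2e} years")   # an astronomically large number of years
```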
If GPT-4 is "better" than Bing's LLM, we can expect more surprises like this, and I suspect each new surprise represents a step closer towards general intelligence.
By the way, Microsoft researchers agree that LLMs are beginning to show signs of Artificial General Intelligence (AGI). Their white paper is lengthy, and I haven't read all of it, but I have read a commentary on it from the excellent Azeem Azhar, and I wholeheartedly recommend his site, Exponential View.
Someone hit the brakes
Inevitably, in the day since I wrote this, there has been another potentially colossal development. This time, it's a human reaction to the fire-hose of new AI models, plug-ins and emergent properties.
The Future of Life Institute has published a letter with soon-to-be hundreds of signatories, many of them household names (Steve Wozniak, Yuval Noah Harari, a certain Mr E Musk), calling for a six-month pause in developing AI models that go beyond the capabilities of GPT-4, the LLM that was released a mere three weeks ago.
The letter, which you can read here, says that while AI is likely to benefit humans overall, the current race seems, to say the least, unregulated. This poses a threat to jobs and to our safety.
It states that a regulatory approach is needed to nurture new AI models and constrain and align them to the needs of humans.
"AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts,” it says. “These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt. This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities."
Emergent concerns
Let's deconstruct that last sentence.
There is certainly a race. The black-box models are LLMs: trained neural networks that, at a low level, predict the next word in a sentence or phrase. When these are scaled and layered to give multiple levels of inference, the results can be astonishing. Now, with GPT-4, they are sometimes exceeding human-level abilities.
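To make "predict the next word" concrete, here's a minimal sketch using the small, openly available GPT-2 model (a distant ancestor of the systems discussed here) to peek at the scores an LLM assigns to candidate next tokens; the model name and prompt are just illustrative.

```python
# Minimal sketch of next-token prediction with a small, openly available model.
# Illustrative only; GPT-4-class models work on the same principle at far
# greater scale.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The Milky Way galaxy is about"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits          # a score for every vocabulary token

next_token_logits = logits[0, -1]            # scores for the next position only
top = torch.topk(next_token_logits, k=5)
for score, token_id in zip(top.values, top.indices):
    print(repr(tokenizer.decode([int(token_id)])), float(score))
```

Everything that looks like conversation, humour, or understatement is built from repeated applications of that single step.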
Here's the thing: these abilities are not limited to a single niche. GPT-4 shows that it can do things it was never explicitly trained for. The recent announcement from OpenAI about a plug-in architecture means GPT-4 can access specialist services over the Internet. This gives it superpowers. It remains to be seen what will happen as the number of plug-ins grows. It is certainly possible to envisage GPT-4 becoming a new general-purpose computing platform: an intelligent app that can do anything.
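The plug-in idea is easier to picture with a toy example. The sketch below is not OpenAI's actual plug-in API; it's a hypothetical dispatch loop showing the general shape: the model decides it needs an outside service, the host calls that service, and the result is handed back for the model to phrase as an answer.

```python
# Hypothetical sketch of a plug-in style tool call. Nothing here comes from
# OpenAI's real plug-in specification; the names are invented for illustration.
import json

def model_decides_tool(user_request: str) -> dict:
    # Stand-in for the LLM deciding it needs an external service.
    return {"tool": "weather_lookup", "arguments": {"city": "London"}}

def weather_lookup(city: str) -> dict:
    # Stand-in for a real web API that a plug-in would expose.
    return {"city": city, "forecast": "light rain", "temp_c": 11}

TOOLS = {"weather_lookup": weather_lookup}

def answer(user_request: str) -> str:
    decision = model_decides_tool(user_request)
    result = TOOLS[decision["tool"]](**decision["arguments"])
    # In the real system the model would turn this JSON back into prose.
    return f"Tool result: {json.dumps(result)}"

print(answer("Will I need an umbrella in London tomorrow?"))
```

The power, and the risk, comes from the fact that the list of available tools is open-ended.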
The biggest concern (while also being the most exciting!) is contained in the term "emergent capabilities". These characteristics and abilities were not programmed, not trained for, and certainly not expected. They literally emerge from the "black box". That's amazing and exciting, but also dangerous in multiple ways.
If they're not expected, they're not contained. You can't build guard rails for something you know nothing about.
What's more concerning is that when we encounter emergent properties, we have no idea how they work. There's no documentation or physical audit trail to lead us to the mechanism behind new abilities.
So we don't know how far they reach, and we can't predict how they'll react in a new situation. Imagine flying in an aircraft that has never been tested and whose workings nobody understands.
The letter seems a good move. Progress can only be beneficial when it's understood well enough to be safe. No one is saying that AI research should cease. It just seems like we'd all be better off if we could agree on where this is going, and how it's going to get there.