UK AI expert Professor Andy Pardoe worries we’re being far too complacent about what ‘AI’ is achieving
In The Queen’s Gambit, the recent Netflix drama about a chess genius, the main character is incredibly focused and driven. You might even say machine-like. Perhaps you could go so far as to say she’s a little like an incredibly single-minded Artificial Intelligence (AI) program such as AlphaGo.
At the risk of spoilers: in the drama, Beth eventually succeeds not just because she’s a chess prodigy able to see the board many moves ahead. She succeeds because she teams up with fellow players who give her hints and tips on the psychology and habits of her main ‘Big Boss’ opponent.
In other words, she employs tactics, strategy, reasoning and planning; she sees more than the board. She reads the room, one might say. Emotions play a huge part in all she does and are key to her eventual triumph in Moscow.
And this is why we’re potentially in a lot of trouble in AI. AlphaGo can’t do any of what Beth and her friends do. It’s a brilliant bit of software, but it’s an idiot savant—all it knows is Go.
Right now, very few people care about that, which is why I fear we may be headed not just into another AI Winter, but into an almost endless AI Ice Age: perhaps decades of rejection of the approach, the VC money taps turned off, university research funding drying up—all the things we saw in the first AI Winter of 1974-80 and the second of 1987-93.
Only much, much worse.
Even though Moore’s Law continues to be our friend, even that has limits
I’m also convinced the only way to save the amazing achievements we’ve seen with programs like AlphaGo is to make them more like Beth—able to ‘see’ much, much more than just ‘the board’ in front of them.
Let’s put all this in context. Right now, we are without doubt enjoying the best period AI has ever had. Years and years of hard backroom slog at the theoretical level have been accompanied by superb improvements in hardware performance—a combination that raised our game almost without our asking. Hence today’s undoubted AI success story: Machine Learning. Everyone is betting on this approach and its roots in Deep Learning and Big Data, which is fine; genuine progress and real applications are being delivered right now.
But there’s a hard stop coming. One of the inherent issues with Deep Learning is that you need bigger and bigger neural networks, with more and more parameters, to achieve more than you did last time, so you soon end up with incredible numbers of parameters: the full version of GPT-3 has 175 billion. But training networks of that size takes immense computational power—and even though Moore’s Law continues to be our friend, even that has limits. And we’re set to reach them a lot sooner than many would like to think.
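To give a rough sense of what ‘immense computational power’ means, here is a minimal back-of-envelope sketch in Python. It uses the widely quoted ~6 × N × D approximation for training FLOPs; the token count and sustained hardware throughput below are illustrative assumptions rather than official GPT-3 figures, but the order of magnitude makes the point that scaling up parameters runs into hard compute limits fast.

```python
# Back-of-envelope: training compute for a dense model, using the
# common ~6 * N * D FLOPs approximation (N = parameters, D = training tokens).
# The token count and sustained throughput here are illustrative assumptions.

def training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total training FLOPs for a dense model."""
    return 6.0 * n_params * n_tokens

N = 175e9   # parameters, GPT-3 scale
D = 300e9   # training tokens (assumed, roughly GPT-3 scale)
total_flops = training_flops(N, D)

# Assume one accelerator sustains ~100 teraFLOP/s in practice (an assumption,
# not a benchmark); real utilisation varies widely.
sustained_flops_per_sec = 100e12
gpu_seconds = total_flops / sustained_flops_per_sec
gpu_years = gpu_seconds / (3600 * 24 * 365)

print(f"Total training compute: {total_flops:.2e} FLOPs")    # ~3.2e23
print(f"Single-accelerator time: {gpu_years:.0f} GPU-years")  # ~100
```

On those assumed numbers, a single accelerator would need on the order of a century of continuous work; in reality such models are trained on thousands of accelerators in parallel, and every further jump in scale multiplies the bill.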
Despite its reputation for handwaving and its love of unobtainium, the AI field is full of realists. Most have painful memories of what happened the last time AI’s promise met intractable reality, a cycle which gave rise to the concept of the ‘AI Winter’. In the UK, a scathing 1973 analysis—the infamous Lighthill Report—concluded that AI just wasn’t worth putting any more money into. Similarly fed up, the once amazingly generous Defence paymasters in the US ended the first, heuristic-search-based boom, and the field went into steep decline until the expert systems/knowledge engineering explosion of the 1980s, which also, eventually, went ‘cold’ when too many over-egged promises met the real world.
To be clear, both periods provided incredible advances, including systems that saved people money and improved industries. AI never goes away, either; anyone working in IT knows there are always advanced programs and smart systems somewhere helping out—we don’t even call them AI anymore; they just work without issue. So on one hand, AI won’t stop, even if it goes out of fashion once again; getting computers and robots to help us is just too useful an endeavour to abandon.
What we need are smart systems that are better at more than one ‘thing’
But an AI Winter will follow today’s boom. Sometime soon, data science will stop being fashionable; ML models will stop being trusted; entrepreneurs offering the City a Deep Learning solution to financial problem X won’t have their emails returned.
And what might well happen beyond that is even worse… not just a short period of withdrawn interest, but a deep, deep freeze—10, 20, 30 years long. I don’t want to see that happen, and that’s not just because I love AI or want my very own HAL 9000 (though, of course, I do—so do you). I don’t want to see it happen because I know that Artificial Intelligence is real, and while there may be genuinely fascinating philosophical arguments for and against it, eventually we will create something that can do things as well as humans can.
But note that I said ‘things’. AlphaGo is better than all of us (certainly me) at playing games. Google Translate is better than me at translating multiple languages, and so on. What we need are smart systems that are better at more than one ‘thing’, that can start being intelligent, even at very low levels, outside a very narrow domain. Step forward Artificial General Intelligence (AGI): suites of programs that apply intelligence to a wide variety of problems, in much the same way humans can.
We’re only seeing the most progress in learning because that’s where all the investment is going
For example: we’ve been focused only on learning for the last 15 years. But AI done properly needs to cover a range of intelligence capabilities, of which being able to learn and spot patterns is just one; there’s reasoning, there’s understanding, and there are plenty of other capabilities that should be part of an overall Artificial Intelligence practice.
We know why that is—we focused on learning because we got good traction with it and made solid progress. But there are all the other AI capabilities we should also be looking at and investigating, and we’re just not. It’s a Catch-22: all the smart money is going into Machine Learning because that’s where we’re seeing the most progress, but we’re only seeing the most progress in learning because that’s where all the investment is going!
To sum up, then: we may not be able to stave off a Machine Learning AI Winter; perhaps it’s an inevitable cycle. But to stave off an even worse and very, very destructive AI Ice Age, I think we need to widen the focus here, get AGI back on its feet, and help our ‘Beths’ get better at a lot more than just ‘chess’… or we’re just going to see them turned off, one by one.
First Published in ITPro Portal on 30th November 2021