Let’s cut through the hype and hysteria and focus on what is really happening right now in the world of Artificial Intelligence.
With the release of ChatGPT (based on GPT-3.5) in November 2022 and the follow-up GPT-4 in March 2023, we have seen a mix of excitement and fear about what AI is capable of. From AI alarmists to AI evangelists, we are seeing the full spectrum of hype around generative and conversational AI. As the fastest product ever to reach 100 million users, the phenomenon has taken the world by surprise.
AI experts around the world who really understand the technology, and who are not distracted by some of the frankly amazing results, will agree that the current models are nothing more than imitations of human intelligence, without any cognitive capabilities of their own. Each is simply a glorified parrot, spitting out an averaged response based on the millions of documents it consumed during training. There are fundamental flaws in the architecture, the learning process and the overall system design that cause random hallucinations.
What we are seeing now is much more Artificial, with a capital A, than intelligent, with a lower-case i. What I mean by this is that current AI techniques have very limited levels of intelligence. This is a major problem for the AI industry and for the investment community looking to capitalise on the current bubble of enthusiasm.
We need to move from Ai to aI, that is, to put more focus on true intelligence. This requires much better knowledge of internal architectures, learning methodologies and approaches, so that we can build systems that demonstrate real cognitive features such as reasoning, understanding and common sense, to name a few.
It is easy to demonstrate that current generative AI falls short in all these areas. AI in general is brittle and fragile, easy to break and error-prone. While humans are not perfect and make mistakes, we expect our machines to be 100% reliable all the time (one of the reasons why we have yet to achieve Level 5 autonomy in self-driving cars).
However, it is also very easy to be a critic, and we should acknowledge the significant achievements and advances of the last decade of AI research and practice. Recent AI breakthroughs have delivered some spectacular applications that have made a real, positive impact on humanity and will continue to do so across many different industries.
We should also draw attention to the fact that, for many applications, simply replicating human intelligence is good enough. We may not need full AGI for many simple applications; if we accept their limitations and faults, we have workable solutions that augment and accelerate human endeavours and add value to many business activities.
But moving towards aI, more commonly referred to as Artificial General Intelligence, will create more robust, connected, transparent and responsible AI applications and systems that can manage more complex tasks and be explicit about their reasoning. Think of an enterprise brain that can see across the value chain of a business, able to make decisions in one area that have positive effects elsewhere in the organisation. This connected intelligence will make end-to-end decisions that cross organisational boundaries and find opportunities for revenue growth and operational efficiency that would be almost impossible to achieve without that depth of analysis and insight.
While we appear to be in a phase of exponential change, and we have certainly seen dramatic progress in the last 10 years or so, the field of AI is more than 70 years old and we have only scratched the surface of human levels of intelligence. AI experts, as ever, are divided on how long it will take to achieve AGI: optimistically 10 years, probably 100 years, worst case 1,000 years. Personally, I think the next 10 to 20 years, or 30 years at worst (depending on whether we have another AI winter), is the most realistic timescale for delivering AGI.
What is clear is that we need to build a lot of supporting infrastructure to deliver more ethically and responsibly produced AI models and applications. AI assurance is a key element of this, and regulation may help direct more investment into these areas to provide the underlying foundations. Remember that the companies making the shovels in the gold rush made the best returns on investment. While building the next breakthrough foundation model will get the headlines, it is the core platforms that will enable us to move forward with fully intelligent systems.
Investors, governments, regulators, institutions, universities, corporates and start-ups all need to support this wider vision for long-term AI success. If we do not, we risk another AI winter, or worse, an AI ice age.
We need to stop both the hype and the doomsday alarmism. AI is not going to wipe out humanity. It is a tool, like any other piece of software: we can switch it off, test it in a simulated world (one real application for the metaverse), and ensure that AGI has common sense, a moral compass, emotion, empathy and goal alignment. It is good that we are all worried and concerned, as this gives the AI industry the remit to build the supporting tools and frameworks, with all the guardrails and tollgates in place, to avoid a potential extinction event. No more Terminator, no more end-of-the-world fantasies. As a sidenote, I very much accept the risks posed by bad actors and by autonomous weapons (and similar malicious implementations); we should ban the production and use of such things globally, and many countries have already signed up to such initiatives. Do not forget that technology by itself is neutral; it is how we decide to use it that gets us into trouble.
A much more likely scenario is the fusion of human and machine. In many ways we have already started that integration: try to separate people from their mobile phones and you will see our connected dependency. Introducing the capabilities of AI into this fusion, with augmented decision-making, will compound our lives and achievements exponentially.
What needs to happen now is for serious funding to be made available for the lower-level tools that will serve as the key foundations for AGI capabilities. Now is the time to get this right, and we need to work together to make it happen. There is another generation of AI companies waiting to emerge that will lead the way to our AGI-driven future.
It is pleasing to see that the UK government has agreed to host an AI Safety Summit in the autumn. I sincerely hope that a range of AI experts, from researchers to practitioners, and from both large and small institutions and companies, are invited to participate, to provide a broad mix of perspectives and give everyone the opportunity to help build the responsible AI of the future.