By Allyn Robins*
Comment - Artificial Intelligence (AI) is the technology story of 2023. The chief executive of Google says its impact will be "more profound" than "fire or electricity". McKinsey happily reports that 'generative' AI alone could "add trillions of dollars of value to the global economy", while a coalition of industry luminaries signs a statement arguing that "mitigating the risk of extinction from AI should be a global priority". Amazon has just invested billions of dollars in an AI company that it hopes can help it rival Microsoft's investment in OpenAI. AI is passing the bar, tutoring schoolchildren, and producing viral videos.
In the face of this burgeoning wave of hype and investment, I say - enjoy it while it lasts. Because it won't.
First, AI isn't developing as fast as the blinding pace of announcements and investments can make it seem. All the big breakthroughs that have been reported this year, all of the attention-grabbing headlines and impressive demonstrations, are the result of decades of slow, dedicated work that occurred outside the public eye.
The first pebble in the current avalanche of AI hype - ChatGPT - was literally released as a marketing move, to build anticipation for GPT-4, a product that had already largely been built. While updates and competing models have trickled out since, the difference between the AI systems available to the general public in 2022 and those available today is not representative of the real pace of progress. And while some AI industry leaders are talking about the imminent possibility of "Artificial General Intelligence" (AGI) - effectively an AI that can truly 'think' rather than just perform a specific task - people were saying the same in the 2000s, and the 70s, and the 50s. They were wrong then, and chances are they're wrong now - though that's a whole other column.
Moreover, most of the biggest advances in AI capability in the last decade have come from scale - larger and larger sets of training data creating larger and more capable AI models. But that's not a trend that can realistically continue; by some estimates, almost all publicly available text and images on the internet have already been scraped and incorporated into AI models, a process that has generated a huge number of lawsuits likely to curtail similar collection in the future. And ironically, AI-generated media is now so widespread online that it will be very difficult to keep out of future mass-collection efforts - which is a problem, considering that training AI systems on AI-generated data rapidly degrades their capabilities. More human training is unlikely to be a viable solution - the underpaid AI trainers that large tech companies rely on are already using available AI tools to automate the 'training' they're supposed to be bringing a human touch to.
AI will of course continue to develop. But making money with it will be harder than you might think. Large, sophisticated AI models are incredibly expensive to develop and run, and almost every ambitious AI project right now is haemorrhaging money - kept afloat either by massive institutional resources or infusions of cash from deep-pocketed investors who trust that they're buying into the 'next big thing'.
To hook users, most AI products are being sold at a loss - and while some users will pony up when the money squeeze comes, many others will abandon the tools or switch to less polished but free open-source alternatives. And AI's limitations, such as its brittleness outside controlled conditions and its tendency to 'hallucinate' - a problem many experts argue is probably unfixable - mean that it's far from simple to deploy safely and effectively. Finally, Google and Microsoft are building AI into their widely used office and productivity products - a move that will almost certainly sink a huge number of smaller companies offering similar services, though it remains to be seen whether it will meaningfully boost the profit margins of their already omnipresent enterprise software.
But what about all those rosy financial predictions and big business investments? Well, those are exactly what we'd expect to see in a bubble. There is a strong financial incentive for companies of every size to hype AI, to make investments in it, and to claim that they're 'AI leaders', because at the peak of a bubble doing so can deliver quick boosts to their stock price. When AI products have to actually start turning a profit, expect that to change very quickly. How many of the fêted blockchain companies of the late 2010s have survived, let alone thrived? How many of those who breathlessly predicted that those companies would 'change the world' have pivoted to saying exactly the same thing about AI?
Of course, AI is going to do far more to change the world than blockchain ever did. It's an incredibly powerful set of technologies with applications from the groundbreaking to the mundane. It's poised to be the most impactful technology of the decade, at a minimum - but it's important to recognise that we're in a bubble, and that many of the products and companies garnering so much attention today may not amount to much in the long run. By recognising where we are in the cycle of hype, we can position ourselves - as individuals, as businesses, and as a nation - to engage with AI as it actually is, and not as its most avid and uncritical cheerleaders present it. There's a lot of good that can be accomplished with AI, but the first step has to be planted on the firm ground of reality.
*Allyn Robins works for Brainbox, an Auckland-based think-tank specialising in law, technology and policy.