Opinion: AI: For Whom?

As an AI researcher, I’m expected to have answers to the thorny questions. “Will AI take our jobs?” “Will AI keep improving at the same speed?” “Will AI take over the world?”

I wish I had the answers. I have some thoughts, of course, but even as an “expert”, I feel like a witness rather than a participant in the AI race. Modern AI progress is driven by the American private sector, not academics. And yet, the impact of AI is felt by the public.

I find myself coming back to different, more immediate questions: Who is AI really benefiting? And why do we allow it to be forced on us?

Take, for example, Google’s new AI Overview in Google Search. Overnight, these unsolicited “summaries” appeared at the top of our Google search results. Sure, they are sometimes helpful. But each AI summary may use 10x as much energy as a normal search and can even contain inaccurate information (hallucinations). And what about the websites that the AI has summarised? If people don’t need to click the links below the summary, then that’s less revenue for the websites and more for Google.

It’s not just Google. Meta has “helpfully” integrated a chatbot into Facebook, Instagram, and WhatsApp; Microsoft is ramming Copilot into every part of its software suite and Windows operating system. Even X/Twitter has been Grok-ified with Elon Musk’s right-wing large language model (LLM). Opting out of these AI tools is sometimes possible, if you find the right setting. But why did Big Tech assume we wanted to be opted in to start with?

This isn’t just a rant about technological change. These AI integrations have serious downsides. Beyond the masses of energy required (the US is now keeping coal-fired power plants open and reactivating nuclear power stations), we are already seeing other, equally worrying consequences. Generative AI may be making us dumber: as we become increasingly reliant on AI to answer questions and solve problems for us, we exercise our critical thinking skills less. I’ve seen this first-hand in my university students, who submit good-looking assignments but achieve barely C-level performance in the final exam.

And if AI does continue to rapidly advance, as the AI evangelists promise, where does that take us? ChatGPT helps my productivity (sometimes), but I’m not working any less. AI image generators are good for illustrating talks, but what of the artists whose hard work enabled this? We have AI-generated videos now, which mostly seem to be used to fool Boomers on Facebook and to spread disinformation. We’re told the next breakthrough will be virtual secretaries, AI lovers, and fully-fledged AI workers. After that? Humanoid robots and the nebulous “Artificial General Intelligence” (AGI).

I’m anxiously sceptical of these predictions – I suspect they are wrong, or at least further off than this decade. Recent studies have shown that LLMs may not get us that much further on the AI rollercoaster. But, either way, we face the same problem. We humans have decided to pursue this pathway, where AI is embraced by companies in pursuit of economic growth and market share, regardless of what it does to our society and everyday lives.

Imagine a world where Big Tech isn’t allowed to just release these new technologies without genuine oversight and consultation. Imagine if we put rules in place first, rather than letting issues crop up and then (maybe) trying to solve them retrospectively. Imagine if we put social good ahead of commercial interests.

When I say these things, I get called naïve, or told that it’s all too hard and we are only a small country in a very globalised world, so how can we stop it? I get that. But giving up is even worse. New Zealand has led the world on big issues before, and we could do far more to control the use of AI if we actually tried.

But our political leaders don’t really seem to care. The Coalition Government has been silent on the topic, except to promote AI for economic growth.

When are we going to stand up and demand leadership, so that AI is used to benefit our society and not just Big Tech?