

The Hidden Forces Driving the AI Boom

Extraction, power and the dark side of the race for AI dominance.

Amid AI’s promise to improve efficiency, enable new innovations and supercharge corporate profits, there is a dark side to the technology that can’t be ignored. Water-guzzling data centres are reportedly causing pollution and water shortages in rural communities. Workers in emerging economies are paid less than US$2 an hour to sift through toxic content in order to make chatbots safe for public consumption. And ChatGPT has been shown to give vulnerable teenagers harmful advice, including instructions on how to compose a suicide letter.

In a recent INSEAD Lifelong Learning webinar, Professor of Technology and Business Theodoros Evgeniou and INSEAD alumnus Tim Gordon (MBA ’00D) from Best Practice AI were joined by author and award-winning investigative journalist Karen Hao for a provocative conversation on the power dynamics, ethical blind spots and structural inequalities shaping AI’s future.

The battle to build an AI empire

Hao was the first journalist to profile OpenAI in a 2020 article for MIT Technology Review, and the company features heavily in her new book Empire of AI. From unbridled Silicon Valley ambition to global exploitation, the book explores how an original mission for safety turned into a high-stakes race for dominance – fuelled by vast data, invisible labour and mounting environmental costs.

“We really need to think of companies like OpenAI as new forms of empire,” Hao said. “If you consider the sheer economic and political power of companies like OpenAI, Google, Meta and other tech companies that have now pivoted to aggressively trying to dominate AI development, you could argue that there’s pretty much no nation-state government that is more powerful than these companies right now.” 

Hao drew four parallels between the empires of old and the tech companies of today: They lay claim to resources that are not their own; they exploit an extraordinary amount of labour; they monopolise knowledge production; and they engage in a competition narrative of an existential race, positioning themselves as “good” empires on a mission to bring progress and modernity to the world.

Hao argued that by using their financial resources to exert influence over everything from politics to the media, tech companies have wrested control of the mechanisms used to regulate AI and shaped how it is perceived by the public. They have also snapped up most of the world’s top AI researchers, to the point where the actual science has been distorted – much like how climate research would be skewed if most of the world’s climate scientists were bankrolled by fossil fuel companies.

An extreme form of capitalism

Hao discussed how the current AI arms race represents an extreme form of capitalism – one underpinned by a deeper ideological drive that has parallels with colonialism. From her perspective, these tech companies “pray at the altar of scale” and are trying to expand their AI models in a way that is not only unnecessary for what they are trying to achieve, but also doesn’t make business or financial sense.

“They are burning an extraordinary amount of money and have not, in any way, developed a business model to generate the comparable amount of revenue that would allow them to return a profit,” she said. “But because they see AI as a narrative cloak around the ability to continue expanding to ever further reaches, this continues to perpetuate their hunger for building these AI systems and pouring more and more resources into it.”

Pointing out that every industrial revolution has been grim for the people involved, Gordon asked Hao if the ongoing use of labour in emerging economies to train AI models should be viewed as exploitation, or if it could be a potential development pathway for these countries.

In response, Hao spoke about her fieldwork in Kenya, where she met with workers performing content moderation for what would become ChatGPT. They were tasked with labelling reams of disturbing content – from violent and sexual material to hate speech – so that OpenAI could develop a filter to prevent it from reaching ChatGPT’s eventual users. Yet companies that have spent billions on AI development paid these workers less than US$2 an hour.

“The workers advocate for it to be a real dignified economic opportunity that provides skill-building, a career ladder, good, stable pay and healthcare benefits,” Hao said. “But the problem is that this is almost never actually the case.”

General-purpose vs. task-specific AI

Evgeniou then brought up a commonly cited argument by AI believers: What if we just invest in developing and scaling the technology now, and then use its capabilities to address these negative externalities in the future?

Hao’s criticism of this perspective is that we simply may not have the planetary or human resources to waste. Perhaps most importantly, she expressed scepticism as to whether going all-in on scaling general-purpose AI models will lead to a technology that can solve all these problems.

Rather than general-purpose models, Hao believes that task-specific AI development is a smarter way forward. This tailored approach is not only less costly in terms of resources but also uses more curated data to develop high-performance models that address specific needs.

“[Big Tech] companies are trying to convince us that every single person needs a rocket for everything, so that you should be using a rocket to commute from Paris to Madrid. But you should actually be using much more efficient, tailor-made, task-specific transportation modes to achieve that purpose,” Hao said. “In the same way, we should be moving all of the capital we’re pouring into this nebulous, general-purpose tool towards task-specific tools that actually tackle medical advancements, drug discovery, healthcare applications and climate change mitigation, because we've already seen plenty of evidence that those solutions work, and they’re significantly less costly than this other approach.”

Individual agency still matters

Evgeniou wrapped up the conversation by stressing that while AI may be here to stay, how it’s developed is not a foregone conclusion. “We must not sleepwalk into a future we don't like… and definitely not a future shaped by just a few, let's say, self-dealing AI emperors,” he said. “We need to collectively consider alternatives and what a more balanced future could look like.”

Hao echoed these sentiments and emphasised that we all have an active role to play in shaping AI development. From investing in alternatives to putting consumer pressure on Big Tech firms, everyone has the power to assert their own preferences on how they think AI should be developed and deployed in their lives.

“The reason why I think it’s so important to understand that there is no inevitability… is because then it opens up much more space for anyone to get involved in shaping the future of technology,” Hao said. “The shape of a technology is never inevitable – it is the many human choices that determine how it ultimately looks and works.” 

For more on the subject, listen to this episode of The Age of Intelligence podcast hosted by Theodoros Evgeniou and Tim Gordon and featuring Karen Hao.

Edited by: Rachel Eva Lim
