How Will ChatGPT Shape Business, Society and Employment?

Will next-generation AI systems such as ChatGPT deliver the productivity boost modern economies need – and are we ready for it?

Over the last 200 years, productivity has grown by 1.5 to 2 percent per year on average, with general-purpose technologies such as the steam engine, electricity and the computer driving rapid growth in the years following their introduction. However, productivity growth has slowed in advanced economies in the last 10 to 15 years. “Could artificial intelligence (AI) be the next general-purpose technology to drive productivity?” asked Morten Olsen, Associate Professor of Economics at the University of Copenhagen.

Together with Theodoros Evgeniou, Professor of Decision Sciences and Technology Management at INSEAD, and Phanish Puranam, the Roland Berger Chaired Professor of Strategy and Organisation Design at INSEAD, Olsen was speaking at the INSEAD Tech Talk X webinar about how the next generation of AI systems will shape business, society and employment.

Chess grandmaster Garry Kasparov wrote in Deep Thinking that a (weak) human working with a machine, given a strong process for working together, can produce better outcomes than either humans or AI working alone, said Evgeniou. According to Kasparov, building a better process at the human-machine interface requires humans to be informed. In other words, we need to understand the technology to grasp its potential, limits and challenges.

Unpacking ChatGPT

ChatGPT is a specific product in a class of technologies known as large language models (LLMs) – an application of machine learning (ML), itself at the heart of modern AI. Like all ML algorithms, ChatGPT looks at a large amount of data, finds “patterns” – regularities that occur with high enough probability – in that data, and uses these patterns to make predictions, such as which word to generate next given the previous ones, explained Puranam.

In school, we may have sat for tests where we were shown a sequence of shapes such as a triangle, a circle, a star and a triangle, and asked to predict what comes next. In simple terms, that's what machine learning does, he said.
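To make the pattern-finding idea concrete, here is a toy sketch in Python. It is emphatically not how ChatGPT works internally – real LLMs use transformer networks with billions of parameters rather than raw counts – but it shows the same core move: learn which continuation most often follows a given context, then predict it.

```python
from collections import Counter, defaultdict

# Tiny training corpus; real models learn from vast volumes of text.
corpus = "the cat sat on the mat and the cat slept".split()

# Count how often each word follows each other word ("patterns" in the data).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Return the continuation seen most often after `word` in training."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("the"))  # -> 'cat' (follows 'the' twice, vs 'mat' once)
```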

The term “GPT” stands for generative pre-trained transformer. It is “generative” because it generates text as a prediction of what users are likely to find useful based on their questions or instructions. It is “pre-trained” on a large corpus of text, using a neural network architecture called a transformer.

In a nutshell, said Puranam, LLMs such as ChatGPT are complex ML algorithms that find patterns in very large volumes of text generated by people in the past and use them to predict what specific users might find useful based on their inputs. The complexity is evident: the model behind ChatGPT has an estimated 175 billion parameters, and unconfirmed estimates put GPT-4, its more advanced successor, at as many as 170 trillion.

To appreciate the potential of LLMs such as ChatGPT, said Evgeniou, it is important to understand that they are not necessarily products, but foundation models. Since foundation models are used in different downstream applications, what we are seeing is just the tip of the iceberg.

A foundation for a myriad of applications

ChatGPT is most commonly used to synthesise or summarise text, translate natural language into programming languages (such as R and Python) and search. In the business context, Puranam gave examples of applications such as copywriting for marketing materials, customer interaction, summarising large legal documents, writing operational checklists and developing financial summaries.

Because ChatGPT can generate text from different viewpoints, it can widen perspectives and enhance creativity, potentially beyond what humans alone would imagine, said Evgeniou. For example, you can generate short summaries of a text such as your company’s mission statement from various perspectives – say, that of a European, an American, a Chinese, a 10-year-old or an 80-year-old reader.
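Such perspective-taking can even be scripted. Below is a minimal sketch that loops over a few viewpoints, assuming the openai Python package (in its pre-1.0 chat-completions interface) and an API key in the OPENAI_API_KEY environment variable; the model name, mission statement and prompt wording are illustrative assumptions, not prescriptions.

```python
import os
import openai

# Assumes an API key is set in the environment; illustrative only.
openai.api_key = os.environ["OPENAI_API_KEY"]

mission = "We bring affordable, sustainable energy to every community."
perspectives = ["a European", "an American", "a Chinese reader",
                "a 10-year-old", "an 80-year-old"]

for perspective in perspectives:
    prompt = (f"In one sentence, restate this mission statement from the "
              f"perspective of {perspective}: {mission}")
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # illustrative choice of model
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"{perspective}: {response.choices[0].message.content}")
```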

The technology is already being used in business to enhance creativity and commercial success: Coca-Cola, for instance, used AI effectively to engage customers in a recent marketing campaign. But creativity is not limited to creative fields, stressed Puranam. The technology can amplify human creativity by generating alternative business plans, business models and so on. Ultimately, however, humans need to evaluate the quality of the generated content.

Turning to more advanced applications, Olsen noted that innovation is typically driven by fundamental and corporate research. The more AI can help in these processes, the faster we will see real innovation, just as the use of AI in biomedical research has cut the time taken for drug discovery and protein-folding prediction to a fraction of what humans alone would need.

Being a foundation model makes ChatGPT the basis for a plethora of applications. Evgeniou believes that AI can augment human intelligence, uncovering needs we didn’t even know we had and creating new companies, products, markets and jobs at a much faster pace.

What does ChatGPT mean for business?

While ChatGPT brings new possibilities, we need sound processes to enable humans and AI to work together effectively. One of the most important lessons in technology adoption, said Evgeniou, is that organisational change is needed to implement a technology successfully and get value out of it.

In addition, trust is a necessary ingredient in technology adoption. But trust is a double-edged sword: when users place too much trust in technology, it can lead to overconfidence in decision-making or to narrative fallacy, where people build stories around the narratives generated by LLMs. In high-risk applications, it can even jeopardise user safety.

Trust is also associated with the question of liability, as Evgeniou noted: If professionals such as doctors, lawyers and architects make mistakes as a result of prioritising AI’s decisions over their own judgement, are they culpable? Would they be covered under malpractice or professional liability insurance?

From the perspective of consumer trust and safety, the exponential growth of content made possible by technologies such as ChatGPT has made content moderation – a critical issue for online trust and safety – more challenging for online platforms. Moreover, the role of AI in creating information filters and bubbles has come under the spotlight.

Families of victims of the Paris terrorist attacks are suing Google over the role its AI recommendation algorithm allegedly played in promoting terrorism. The case has brought Section 230 of the Communications Decency Act before the United States Supreme Court for the first time, raising the alarm about the potential dangers of recommendation algorithms and exposing other online platforms that employ AI to litigation risk, said Evgeniou.

Talent development is another consideration. Puranam cautioned that over-reliance on LLMs can cause our skills to atrophy, particularly in creative and critical thinking. Companies should avoid the myopic view of automating lower-end work just because technology allows it. “In some professions, you can't be a partner without having been an associate, and you can't be a full professor without having been a research assistant,” he said. Automation without due consideration for talent development can therefore disrupt an organisation’s talent pipeline.

Evgeniou proposed that companies put guidelines in place to ensure that AI is harnessed safely, specifying who should use it, and when and how. “In AI adoption, we need to put humans in the driver’s seat to monitor the behaviour of AI,” he said.

Is society ready?

While some people are understandably concerned about being replaced by ChatGPT, technological unemployment has not materialised in the last 150 years, said Olsen. AI is not expected to lead to massive unemployment in the next five to ten years either, he assured; the more relevant concern is how it will affect income distribution.

New technologies can bring about two effects: a productivity effect and a substitution effect. As economist Robert Solow observed, productivity effects become apparent in productivity statistics only over time. The substitution effect, meanwhile, affects individuals to different extents depending on their skill level.

In the 1850s, low-skill-biased technological change saw skilled shoemakers displaced by unskilled workers who mass-produced shoes in factories. In contrast, the skill-biased technology that enabled factory automation from the 1980s to the 2010s favoured those with university degrees over low-skilled factory workers. Currently, it is unclear which group will benefit from LLMs.

At a more fundamental level, there is the question of whether LLMs can be truly unbiased and inclusive. Understanding how these models learn reveals why they can be inherently biased. Models such as ChatGPT build knowledge through unsupervised learning (i.e. observing conversations), supervised learning and reinforcement learning, in which experts “train” the models based on users’ feedback, explained Puranam and Evgeniou.

This means that ChatGPT “learns” from the individuals who train and use it, adopting their values, views and biases on politics, society and the world at large. Therefore, while ChatGPT can be democratising, it can also be centralising, depending on the experts who train it, said Puranam.

Moreover, the risk of misinformation is heightened by the speed at which content proliferates and the ease with which it can be weaponised to threaten democracies and institutions. It is now even expected to influence election campaigns, said Evgeniou. Puranam also cautioned that people whose social lives exist only in online channels are at high risk, as they may struggle to tell truth from falsehood. Olsen agreed that ChatGPT can perpetuate the views of individuals who are already siloed in their own informational bubbles online.

The panellists were cautiously optimistic and agreed on the need for appropriate management and regulation to ensure ethical and responsible use of technologies such as ChatGPT.

Learning to work together

In practice, regulation will always lag behind technological innovation. The European Union’s Digital Services Act, designed to safeguard online safety, was already behind by the time it entered into force in late 2022: it covers online platforms such as Facebook and Google but not ChatGPT, even though the latter aggregates online content.

Similarly, although foundation models can be used in high-risk downstream products, they fall through the cracks of AI regulations. As big tech companies continue to develop new foundation models, a proliferation of downstream products could follow. If the foundation models remain unregulated, they may become single points of massive, cascading failure.

But regulating an emerging, evolving technology across different geographical regions comes with challenges. AI algorithms adopt values from the data used to train them, which can result in different AI cultures across regions. This increases the complexity of regulation, said Evgeniou. Even if regulations were the same around the world, their implementation and results would differ, owing not only to different legal systems but also to different value systems.

In spite of these challenges, a combination of actions by data scientists, businesses and regulatory bodies can improve trust and safety in technology. Transparency and trust go hand in hand, and it pays for businesses to be transparent in their engagement with customers – for instance, by disclosing when content is generated by ChatGPT and when customers are interacting with a machine rather than a human.

One ongoing effort to align AI more closely with human values is the field of reinforcement learning from human feedback (RLHF), said Evgeniou. By incorporating human feedback, we can try to improve the quality of the AI’s output based on human values. However, according to Evgeniou, we are only at the beginning of solving the AI value alignment problem.
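To illustrate the first step of RLHF – fitting a reward model to human preference judgements – here is a minimal sketch under strong simplifying assumptions: candidate answers are represented by made-up numeric features and the reward model is linear, whereas real systems score the outputs of large transformer networks. The training objective, a Bradley-Terry pairwise preference model, is the standard one.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: feature vectors for pairs of candidate answers,
# where human raters preferred the first answer of each pair.
preferred = rng.normal(1.0, 1.0, size=(200, 4))  # answers raters chose
rejected = rng.normal(0.0, 1.0, size=(200, 4))   # answers raters rejected

w = np.zeros(4)  # linear reward model: reward(x) = w @ x

# Gradient ascent on the pairwise log-likelihood: maximise the probability
# that each preferred answer scores higher than its rejected counterpart.
for _ in range(500):
    margin = preferred @ w - rejected @ w
    p_correct = 1.0 / (1.0 + np.exp(-margin))  # P(preferred beats rejected)
    grad = ((1.0 - p_correct)[:, None] * (preferred - rejected)).mean(axis=0)
    w += 0.1 * grad

print("Learned reward weights:", w.round(2))
# A policy model would then be fine-tuned (e.g. with PPO) to maximise
# this learned reward, steering its outputs towards human preferences.
```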

In the meantime, while AI has proven it can beat a human at chess, this is not the case in all fields. There is potential for AI to complement humans, which requires a better understanding of the opportunities and limits of combining the two. As LLMs continue to evolve, all the panellists saw human-machine ensembling – using AI to improve the quality of human thinking – as a promising area, along with identifying the conditions necessary to achieve it.

Edited by:

INSEAD Knowledge

About the series

AI: Disruption and Adaptation
Delve deeper into how artificial intelligence is disrupting and enhancing sectors – including business consulting, education and the media – and learn more about the associated regulatory and ethical issues.