
Three Objectives for Moving Forward With AI

Taking ownership of AI is about more than technological competencies – it’s an overarching organisational challenge.

If Big Data is the oil that fuels the digital economy, artificial intelligence (AI) is the automobile. It can transport companies from the old, largely physical asset-based model to the new, information-based world, where competitive advantage goes to those who squeeze the most value out of their data in the least amount of time.

Both powerful incumbents trying to evade disruption and SMEs aiming for growth will have to learn to drive AI, or else yield to the juggernaut of Big Tech currently monopolising the space.

Taking ownership of AI means not just incorporating the technology but ensuring it creates value for business and society. Three overarching challenges are involved: demystifying AI so that employees and managers find it approachable; determining when and how to implement AI; and defining ethical and moral boundaries.

I recently moderated the AI Forum hosted by Digital@INSEAD, where dozens of speakers (ranging from INSEAD colleagues to global entrepreneurs) gathered in Singapore to share ideas, insights and first-hand experiences related to the challenges of AI adoption.

Demystifying AI

People tend to resist technology they find intimidating. This can be a barrier to AI adoption, as the technology is not well understood among managers, particularly older ones. But according to Phil Parker, INSEAD Chaired Professor of Management Science, we needn’t regard AI as something new and unfamiliar. Kicking off the forum, he brandished a pocket calculator as an example of “artificial intelligence” that blew people’s minds in its day but has long since been demoted to a hardware component we all take for granted. “In 20 years, we may think of self-driving cars as ‘just cars’, just like no one is now impressed that we have pocket calculators to do square roots for us,” Parker said.

Managers who are mystified by AI may try to buy their way out of having to work with it directly, but Parker cautions that’s the wrong course. “The last thing you do is build a data centre. Do a full-blown audit; know what you want and where the financial value is.”

Perhaps most excitingly, Parker encourages anyone in business to try getting their hands dirty with coding. Using free online tools – including YouTube explainer videos – and the open-source code available on GitHub, newbies can learn many of the basics and start building rudimentary machine learning and AI solutions.

In a separate talk on how to accelerate understanding of these areas, Parker told attendees their goal should be to “learn enough [coding] language so you can feel comfortable doing anything”. With programming languages such as Python, however, the sheer weight of buzzwords and jargon can be daunting. Parker is currently addressing this problem by designing and delivering micro-classes (as part of INSEAD’s Global Executive MBA) to teach “the minimum amount of code to get people to the next level”. After a crash course in coding lasting just a few hours, Parker says, “people go ‘Oh my God!’ They didn’t realise it was that accessible. Two start-ups were launched immediately afterwards.”
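
For readers who want to see what that first step looks like, here is a minimal sketch of the kind of rudimentary machine-learning exercise Parker describes: a few lines of Python that train and evaluate a classifier. The choice of the open-source scikit-learn library and its bundled iris dataset is an illustrative assumption; no specific tools were named at the forum.

```python
# Train a first machine-learning model on scikit-learn's bundled iris
# dataset -- the kind of exercise a few hours of self-study makes possible.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Load a small, clean, built-in dataset (no data wrangling required).
X, y = load_iris(return_X_y=True)

# Hold out a quarter of the rows to test how well the model generalises.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# Fit an off-the-shelf model and score it on the held-out data.
model = RandomForestClassifier(random_state=0)
model.fit(X_train, y_train)
print(f"Held-out accuracy: {accuracy_score(y_test, model.predict(X_test)):.2f}")
```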

Determining applications

Once they have worked past their algorithm aversion, leaders and managers must decide where to apply AI-driven solutions in their business.

In his talk, Phanish Puranam, Roland Berger Chaired Professor of Strategy and Organisation Design, argued that algorithms could be used to help firms find better ways to work. For example, companies could build a “digital twin”, or computer model, of their own social networks and the attitudes of the important players. Before undertaking an expensive change initiative, they could trial it virtually to gauge its chances of success.
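
To make the idea concrete, here is a hedged sketch of what such a virtual trial might look like: a toy “digital twin” in which a change initiative spreads through a simulated contact network, and many virtual pilots are run to estimate its chances of taking hold. The network structure, adoption threshold and other parameters are invented for illustration; this is not Puranam’s actual model.

```python
import random

random.seed(42)
N_EMPLOYEES = 200         # size of the simulated organisation (invented)
CONTACTS_PER_PERSON = 6   # links each employee initiates (invented)
ADOPTION_THRESHOLD = 0.3  # fraction of contacts needed to convert someone
N_SEEDS = 10              # employees the initiative targets first
N_TRIALS = 500            # virtual pilots to run before the real one

# Build a random contact network as an adjacency list of sets.
network = {i: set() for i in range(N_EMPLOYEES)}
for person in range(N_EMPLOYEES):
    for contact in random.sample(range(N_EMPLOYEES), CONTACTS_PER_PERSON):
        if contact != person:
            network[person].add(contact)
            network[contact].add(person)

def run_pilot() -> float:
    """Seed the initiative with a few advocates and let it cascade."""
    adopted = set(random.sample(range(N_EMPLOYEES), N_SEEDS))
    changed = True
    while changed:  # keep sweeping until no one else converts
        changed = False
        for person, contacts in network.items():
            if person in adopted or not contacts:
                continue
            if len(contacts & adopted) / len(contacts) >= ADOPTION_THRESHOLD:
                adopted.add(person)
                changed = True
    return len(adopted) / N_EMPLOYEES

# Estimate the initiative's prospects before spending real money on it.
reach = [run_pilot() for _ in range(N_TRIALS)]
print(f"Mean adoption across pilots: {sum(reach) / N_TRIALS:.0%}")
print(f"Pilots reaching a majority:  {sum(r > 0.5 for r in reach) / N_TRIALS:.0%}")
```

Swapping in a firm’s real network data and measured attitudes is what would turn a toy like this into the digital twin Puranam describes.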

Narrowing his focus from the organisational to the managerial level, Puranam, who’s also Director of INSEAD’s “AI for Business” open enrolment programme, noted that human intelligence and artificial intelligence both boil down to pattern recognition. The capability to spot and exploit patterns amid a vast dataset is what allowed the AI system AlphaGo to beat Go world champion Lee Se-dol in 2016. But that does not mean that AI can do everything better. There are many tasks that only humans can do well today, and others where a combination of humans and algorithms can outperform either on its own. “A smart human and a smart machine can make predictions and both be wrong, but as long as they are wrong in different ways, the average result can hew closer to actual outcomes,” Puranam said.
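
A quick numerical illustration of that averaging argument (the error sizes below are invented): two forecasters who are equally noisy but err independently produce, when averaged, an estimate measurably closer to the truth.

```python
import random

random.seed(1)
TRUTH = 100.0  # the quantity being forecast (illustrative)
N = 10_000     # number of simulated forecasts

def rmse(errors):
    """Root-mean-square error: the typical distance from the truth."""
    return (sum(e * e for e in errors) / len(errors)) ** 0.5

human_err, machine_err, combined_err = [], [], []
for _ in range(N):
    # Human and machine are equally noisy but "wrong in different ways":
    # their errors are drawn independently.
    human = TRUTH + random.gauss(0, 10)
    machine = TRUTH + random.gauss(0, 10)
    human_err.append(human - TRUTH)
    machine_err.append(machine - TRUTH)
    combined_err.append((human + machine) / 2 - TRUTH)

print(f"Human RMSE:    {rmse(human_err):.1f}")     # ~10
print(f"Machine RMSE:  {rmse(machine_err):.1f}")   # ~10
print(f"Averaged RMSE: {rmse(combined_err):.1f}")  # ~7, i.e. 10 / sqrt(2)
```

With independent, equally noisy errors, averaging cuts the typical error by a factor of about √2; if human and machine erred in the same way, the gain would vanish, which is why being wrong in different ways is the crucial condition.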

He then presented a checklist for identifying areas where algorithms would add the most value: for instance, problems where a marginal increase in accuracy produces disproportionately large benefits, or those for which clean, reliable data are actually available. On the latter point, Puranam noted that though it seems obvious, “this is the number-one constraint. Companies have lots of data in theory, but it’s not in one place; no one knows its quality. Integrating it is a nightmare.”

Defining the boundaries responsibly

AI, like any new technology, is merely a tool (albeit an extremely powerful one) designed to serve human needs. But ethical and moral issues arise when large-scale automation threatens human livelihoods, data harvesting runs afoul of personal privacy, or pre-existing biases and inequalities (e.g. gender gaps in hiring) are baked into the algorithms. In her talk, Sunita Kannan, a director at Accenture, discussed what she termed the “three pillars of responsible AI”: humans at the centre, regulatory compliance and ethical design.

Presumed breaches of ethics can also affect consumers’ psychological well-being and ultimately hurt companies’ bottom line. Klaus Wertenbroch, Novartis Chaired Professor of Management and the Environment, shared his research on how AI and Big Data can threaten customers’ sense of autonomy. He suggested that there is something innately paradoxical about the commercial use of AI: People love algorithmic spoon-feeding until it seems to transgress an almost imperceptible line delineating individual free will. If they feel that bots are compromising their individual agency, consumers may push back, even against their own best interests.    

One of Wertenbroch’s studies found that when customers believed their future choices could be predicted based on past patterns, they gravitated away from their preferred option and chose differently. In other words, consumers violated their own preferences in order to re-establish their sense of autonomy. But when predictability-related language was replaced by references to consistency, customers no longer felt inclined to cut off their nose to spite their face. Though predictability and consistency effectively meant the same thing in Wertenbroch’s research context, only the latter reaffirmed consumers’ autonomy and individuality – and thus was acceptable.

It can be tempting (and profitable) for companies to use all the data at their disposal to make their offerings more attractive to consumers. For example, in a 2017 PNAS study employing a database designed to correlate likes with personality profiles, about three million individual Facebook users were classified as either introverts or extroverts based on a single “like” they had left on a branded page. Using this coarse classification alone, the researchers delivered targeted ads that drove up conversion rates (i.e. sales) by about 50 percent, from roughly 0.01 percent to 0.015 percent.

But Facebook’s recent travails should be a cautionary tale. Psychographics firm Cambridge Analytica surreptitiously employed account-holder data from the social network to create targeted ads that may have helped swing the 2016 U.S. presidential election in favour of Donald Trump. Nearly US$40 billion was wiped off Facebook’s market value the day the scandal broke.

“My advice would be to tread carefully and leave some surplus on the table,” Wertenbroch said. “Companies can maximise profits in the long run by positioning themselves as ethical not only through talk, but through what they do.”

About the series

AI: Disruption and Adaptation
Delve deeper into how artificial intelligence is disrupting and enhancing sectors – including business consulting, education and the media – and learn more about the associated regulatory and ethical issues.
Comments (2)

Anonymous

17/08/2022, 12.40 am

Thank you, Mr. Davis. AI is inevitable, and all must learn to find the best solutions for incorporating AI and using it for an organization's best ROI. To find the best solutions, one needs to ask the optimal questions about the AI tool and the human interaction that best suit the needs. Outside professionals in executive coaching can fill the gap between asking the best questions and eliciting the most constructive answers for AI integration. An optimization coach is essential in these times.

Tathagat Varma

11/04/2022, 07.01 am

It will be interesting to see how humans push back to retain their free will and agency against the onslaught of algorithmic nudges, recommendations and, eventually, choices. Initially, it is always a source of great amusement and even admiration as "magic"; then it moves to the point where it is seen as a new-found source of productivity... before it degenerates into being too intrusive, and then the real pushback begins.
