
How Should Humans Collaborate With AI?

Phanish Puranam, the Roland Berger Chaired Professor of Strategy and Organisation Design at INSEAD

When bringing algorithms and employees together, businesses should respect rather than ignore human preferences.

This article is part of a series titled “The Future of Management”, about how changes in culture and technology are reshaping what managers do. INSEAD professors Pushan Dutt and Phanish Puranam serve as academic advisors for this series.

As businesses increasingly adopt AI-driven decision making, experts agree that the most interesting questions are not about whether humans can beat machines or vice versa but how the two forms of intelligence can most fruitfully collaborate – and how organisations can best facilitate those collaborations.

In a recent essay published in the Journal of Organization Design, I pointed out that there are at least four distinct forms of division of labour between humans and AI when it comes to decision-making tasks: humans and AI can either specialise in different parts of a task or not, and they can work either in sequence or in parallel. Specialisation seems the way to go when one type of intelligence has a clear advantage at some portion of a task. For example, activities such as leading a meeting or conducting sales calls are best left in human hands, while a number-based assessment of the relative financial performance of companies in a portfolio clearly plays to algorithmic strengths.

However, even when there is no clear advantage, one might still gain from the diversity in how humans and algorithms solve the same problem. For instance, we might pool estimates of profitability generated in parallel to improve investment decisions, or use one intelligence as a check – a “second opinion” – in medical diagnosis. The framework was meant to stimulate exploration of which configuration might produce the best results for any given decision task.

But shouldn’t it also matter in which configurations humans are most comfortable working alongside AI? For instance, even if specialisation in sequence is technically the better configuration given the relative skills involved, humans may inherently distrust it, in which case parallel work without specialisation may be easier to implement. We know that humans have trust issues (often well-justified) with new technologies in general, and AI algorithms have not proved to be an exception. It may be not only consistent with humanistic values but also good for business to respect rather than ignore such concerns.

So are there configurations of collaboration with AI algorithms that humans are more or less likely to trust? Do such preferences vary across countries and sectors? To explore this idea, Research Associate Ruchika Mehra and I recently launched “The Bionic Readiness Survey”, the first step on what we hope will be a comprehensive and fascinating journey to help organisations learn more about how humans and AI can work together effectively. Should you wish to participate, it should take you between six and 12 minutes to answer the questions, depending on which of our randomly selected surveys you receive. (All participants who finish the survey get a one-page downloadable “cheat sheet” of cutting-edge curated content – links to articles/videos about how AI is affecting organisations.)

The survey has already gathered hundreds of responses from around the world. With this data, we hope to offer reliable insight into which configurations people systematically prefer or dislike. The survey also gives an overall picture of how ready respondents are for the bionic future – one in which humans and algorithms work together. We can also slice the data to compare your organisation’s responses to the overall sample; just drop us a line in advance to arrange it.

There is little doubt that human-AI collaboration will play a significant role going forward. Organisation designers have the responsibility of making sure humans like the terms on which this occurs.   

Phanish Puranam is the Roland Berger Chaired Professor of Strategy and Organisation Design at INSEAD.


Comment
Tathagat Varma,

This is a very critical question, i.e., how will humans and smart machines and algorithms cooperate rather than compete for jobs. A new division of cognitive labour is definitely on the cards, and the ones that manage to do it well stand to gain the most from this synergy.

