
Can Emotion Be Automated?

Chengyi Lin, INSEAD Affiliate Professor of Strategy

The emotional literacy gap between bots and humans is expected to narrow, thanks to a series of technological advances.

When it comes to computational heavy lifting, artificial intelligence (AI) beats humans every time. But within current technological limitations, no amount of number-crunching alone has enabled AI to develop emotional intelligence (EI). So far, there is no code for that. Even the most responsive chatbots rely on speech recognition, natural language processing and other algorithms, and so remain unable to read between the lines of human communication.
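To illustrate why word-level processing misses subtext, here is a toy lexicon-based sentiment scorer. The word lists and scoring scheme are invented for this sketch, not drawn from any real chatbot, but the failure mode is the general one: literal word-matching cannot detect sarcasm.

```python
# Toy lexicons -- illustrative only, not from any real system.
POSITIVE = {"great", "love", "wonderful", "helpful"}
NEGATIVE = {"bad", "hate", "terrible", "useless"}

def naive_sentiment(text: str) -> int:
    """Count positive words minus negative words in the text."""
    words = text.lower().replace(".", "").replace(",", "").split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

# A sarcastic complaint scores as strongly positive (+2), because
# "great" and "love" are counted literally -- the algorithm reads the
# words, while a human reads between the lines.
print(naive_sentiment("Great, another outage. I just love waiting on hold."))
```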

For humans working in organisations, that’s very convenient. While some speculate that AI and machine learning will soon become key contributors to business strategy development, emotionally illiterate bots would be totally useless at strategy execution. For that, you need to be fluent in the extremely subtle and largely non-verbal language of collective emotion, which is often your only tool for gauging the progress of strategic change. Keeping EI and AI far apart limits the number of white-collar jobs that could ever be lost to automation.

However, as time and technology march on, the possibility of “emotional AI” is becoming less distant. Anticipating the increasing importance of emotions in business, IBM, Microsoft, Google, Apple, Amazon and others have all started their own face and emotion recognition programmes (e.g. Microsoft’s Azure Media Analytics and Apple’s Face ID). Start-ups are also jumping into the race, working with academics to develop and train relevant algorithms. Examples include Affectiva, NVISO and Kairos for face analysis in images and videos, and Behavioral Signals and Cogito for voice analysis.

To prepare for the eventual arrival of emotional AI, we should look closely at the likely pros and cons, thereby taking stock of our complex feelings about emotional bots.

Big implications

Some degree of unease at the idea of robots giving and receiving emotional feedback is understandable. However, the outcomes needn’t all be negative. Like any significant technological advance, emotional AI offers huge potential benefits to humanity in at least two ways.

Benefiting research and application

First, there are possible mutual benefits for research and application. The renowned psychologist Paul Ekman and the many researchers following in his footsteps have done exciting work on “micro-expressions”. Building on these decades of findings, machine learning algorithms can now decode emotion from photos and videos. Such algorithms could accelerate the shift from the Facial Action Coding System (FACS), which requires trained human observers, to automated real-time analysis. Analysis of these large samples of unstructured data may, in turn, reveal new patterns and advance academic research in related fields. More importantly, deeper understanding in each subfield (e.g. verbal communication, facial expression, voice and body language) could lead to a more comprehensive and integrative understanding of human emotions.
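As a hedged sketch of the principle, the snippet below maps a few commonly cited FACS action-unit (AU) combinations to basic emotions. Real systems estimate AU intensities from video frames rather than taking them as input, and the exact signatures vary across the literature; this is a minimal lookup, not any vendor's method.

```python
# Commonly cited AU-to-emotion signatures (EMFACS-style pairings).
AU_TO_EMOTION = {
    frozenset({6, 12}): "happiness",       # cheek raiser + lip corner puller
    frozenset({1, 4, 15}): "sadness",      # inner brow raiser + brow lowerer + lip corner depressor
    frozenset({1, 2, 5, 26}): "surprise",  # brow raisers + upper lid raiser + jaw drop
    frozenset({4, 5, 7, 23}): "anger",     # brow lowerer + lid action + lip tightener
}

def classify(aus: set) -> str:
    """Return the emotion whose AU signature best overlaps the observed AUs."""
    best, score = "neutral", 0.0
    for signature, emotion in AU_TO_EMOTION.items():
        overlap = len(signature & aus) / len(signature)
        if overlap > score:
            best, score = emotion, overlap
    return best

print(classify({6, 12}))     # happiness
print(classify({1, 4, 15}))  # sadness
```

In a real pipeline, the hard part is the step omitted here: scoring each action unit from pixels, which is where the machine learning lives.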

Perhaps most excitingly, combining emotion analysis with text analysis could also help detect depression and suicidal tendencies from image and video posts on social media. Some of these ideas are being tested in practice by start-ups.

Improving well-being beyond productivity

Second, emotional AI may increase productivity for businesses and well-being for employees. Computational tools used as job aids could significantly enhance human performance. For example, it may take months or even years for a human observer to become effective at detecting micro-expressions, whereas trained computer algorithms can analyse human emotion in near real time. Psychologists could gain access to diagnostic test results like those available to medical specialists dealing with the body. Commoditisation of the technology would also allow such tools to be used in all areas of customer service. For example, Behavioral Signals has started offering call centre representatives emotion recognition tools to better serve clients. This could shorten their training time, lower their stress and increase service quality.

What about jobs and society?

Still, we should not dismiss the fearsome threat of emotional bots replacing human workers. A fair assessment of the issue calls for an historical perspective as well as a look at adjacent industries. Digitalisation is widely regarded as the fourth industrial revolution. Reviewing the past three revolutions, we can see that each technological disruption reduced the need for human labour and increased productivity. At the same time, opportunities emerged for humans to take on more “challenging roles”. For example, Henry Ford’s invention of the production line led to the creation of middle managers and the birth of management theories and business schools.

Similarly, the introduction of emotional AI could significantly shift the nature of work by:

1) providing real-time and high-quality data analytics on individual and collective emotions;

2) assisting managerial decision making to improve workload allocation, employee satisfaction and employee well-being;

3) providing feedback data to improve managers’ behaviours and outcomes.

The above could especially come into play at critical moments such as organisational changes, strategic initiative roll-outs, annual appraisals and exit interviews. The insights generated may be key to increasing the effectiveness of strategy execution.
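To make the first of these shifts concrete, here is a minimal sketch of rolling individual emotion readings up into a weekly team-level signal, the kind of collective-emotion analytic a leader might watch during a strategy roll-out. The data, field names and the -1-to-1 scoring scale are all assumptions for illustration, not any tool's actual output format.

```python
from statistics import mean

# Hypothetical per-employee emotion scores (-1 = negative, 1 = positive),
# e.g. as produced by an emotion-AI tool during a change roll-out.
readings = [
    {"employee": "a", "week": 1, "score": -0.4},
    {"employee": "b", "week": 1, "score": -0.2},
    {"employee": "a", "week": 2, "score": 0.1},
    {"employee": "b", "week": 2, "score": 0.3},
]

def weekly_team_signal(readings):
    """Average individual scores per week to track collective emotion."""
    by_week = {}
    for r in readings:
        by_week.setdefault(r["week"], []).append(r["score"])
    return {week: round(mean(scores), 2) for week, scores in sorted(by_week.items())}

print(weekly_team_signal(readings))  # {1: -0.3, 2: 0.2}
```

A rising trend like the one above might suggest that resistance to a change initiative is easing, the sort of read a leader currently makes only from non-verbal cues.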

How can leaders and entrepreneurs take advantage of these developments and, even more importantly, make responsible use of the new tools? The answer is to explore and learn. In partnership with start-ups, leaders could run experiments and design prototypes to discover fresh solutions for organisational challenges. Through the process, leaders could also carefully craft and refine a protocol that addresses privacy and data security concerns and correct for the Hawthorne effect when monitoring employee communications. Leaders could also learn the boundary conditions for using emotion data to make decisions at the organisational, team and individual levels.

Rana Gujral, CEO of Behavioral Signals, summarised this well when I asked him whether emotionally intelligent machines were cause for fear or hope. He said, “I think it's good for humans, because I would rather talk to an emotionally aware machine than emotionally unaware machines, because an emotionally aware machine is probably going to be more ethical…If a machine is very intelligent, but it's emotionally unaware, that's the definition of a psychopath in a machine form, right? And so I think it's a good thing, but also the big thing is we give our fellow humans the tools to be more emotionally aware.”

Chengyi Lin is an Affiliate Professor of Strategy at INSEAD. 


Comments
Aishwarya J,

Looking at the current chatbot trends in artificial intelligence, machine learning and natural language processing, we can be sure that an automated future is just around the corner. The increasing attention, education and awareness associated with these fields is a clear indicator of this. AI, ML and NLP are all closely associated with one another, so their collective development is imperative to achieve complete and true automation.

Chengyi,

Indeed, Aishwarya. Your observation is correct. All these technologies and many more to come will complement each other and advance machine intelligence.

Anand,

The assumption that machines recognising emotions will be more ethical is premature.
Humans who understand emotions have exploited members of their own families, exploited workers in corporations and exploited citizens in the name of nationalism.
AI with EI is a great threat. The mind wielding the AI may be a thuggishly programmed AI/EI combination.

Chengyi,

Thank you, Anand, for your call for caution. We cannot assume technology will be ethical or unethical by itself. As Rana and I discussed in the video, the responsibility falls to the users of the technology. That is why IBM Chairwoman, President and CEO Ginni Rometty called for "responsible stewardship" of all technology usage at this year's Viva Technology in Paris. It is our collective responsibility.
