
Four Trust Types That Make or Break AI Projects

The success of organisational AI initiatives depends on whether employees trust AI with both their heads and their hearts.

Companies invest heavily in artificial intelligence, yet as many as 80 percent of AI projects fail. Why? Natalia Vuori (one of this article’s co-authors) and her colleagues discovered that success hinges not just on the technology, but on something far harder to quantify: various forms of trust – in other words, what employees think about AI’s capabilities and how they feel about it. This matters even in organisations with strong cultures of trust.

The researchers conducted an in-depth study at "TechCo", a Scandinavian software firm of about 600 employees implementing an AI-powered knowledge-mapping tool. This tool collected data from employees' digital activities to create a visual expertise map showing who knew what across the organisation.

Through interviews with TechCo’s managers and employees (conducted on condition of anonymity) and analysis of usage data from the AI tool, the researchers uncovered four forms of trust among employees. Each form of trust leads to different behaviours that can directly affect the success or failure of AI initiatives in organisations.

The study, now published in the Journal of Management Studies, points to actions through which leaders can foster various forms of trust in AI, as we explain later in the article.

The four forms of trust

1. Full trust: High cognitive and emotional trust 

Employees with full trust believed in the AI tool's capabilities (cognitive trust) while feeling comfortable with the technology (emotional trust). As one manager said: "I perfectly understand the logic [behind the tool], if you improve the way you're harnessing the knowledge and insights of people, if you make it easier for people to find each other, that makes perfect sense."

These employees saw strategic applications beyond the tool's basic functions. One of them noted: "You can observe who you're collaborating with, as well as who you're not collaborating with... you can consider your own conduct and determine what kind of individuals you need to work with."

Emotionally, these employees felt positive about AI: "I think it's where the world is going, and for me... if I'm working now and I'm being paid, why shouldn't it be transparent?" Significantly, employees with full trust didn't change their digital behaviours, providing the AI with accurate data needed for optimal performance.

2. Uncomfortable trust: High cognitive trust and low emotional trust

The second form of trust involved employees who recognised the tool's value but worried about its implications. One manager said: "That's a wonderful idea that you would somehow be able to figure out who would be the best expert for this... But at the same time, just when you may have started with the positive potentials, you may not have noticed these negative potentials."

Many feared the potential misuse of data: "There is always the worry that those data will be used for something else I don't have any control over... For example, against us. Now, it's focused on people, not from the management side. But I guess companies want to be more efficient, and – well, there's a fine line."

To handle this cognitive-emotional conflict, these employees became wary of their digital footprints. They limited the information visible to the AI by marking calendar events as private or using generic descriptions.

3. Blind trust: Low cognitive trust and high emotional trust

Some employees questioned the AI tool's competence while still feeling comfortable with it. As one interviewee said: "I sometimes feel like it is not tracking the amount of time I've spent on either technology properly." Another said the map generated by the tool did not accurately reflect the expertise of some colleagues. "It was hard to find the person with actual knowledge."

Despite these concerns, they didn't feel threatened by the technology: "I am not concerned about sharing information because I know that the information... is information that generally could benefit other people to find as well."

Interestingly, these employees responded by enriching their digital footprints. They added more information to their calendars, project entries and online discussions to help improve the tool's performance. As one employee explained: "Let's take a step forward and provide the necessary details to make our tool more efficient."

4. Full distrust: Low cognitive and emotional trust 

Employees with full distrust neither believed in the tool's capabilities nor felt comfortable with it. They described negative experiences ("I tried using [the tool], and nothing worked at all") and questioned its fundamental approach ("We shouldn't trust only data or digital services to make decisions").

These employees also experienced negative emotions, particularly fear. One confided: "I feel that it is dangerous. My fear is that it may be the misuse of data. They [the collected data] are used against you in some cases."

Their responses were the most damaging to the AI system – either withdrawing their data entirely ("I just opt out") or actively manipulating their digital footprints by using certain keywords to shape how they appeared in the system.

These behaviours created a vicious cycle. When employees withdrew, confined or manipulated their digital footprints, the AI received imbalanced or inaccurate data, decreasing its performance. As one interviewee noted: "Some experts disappeared from the visual map."

Lower performance reduced trust further, leading to decreased usage until, eventually, the project failed.

How to make your AI initiative stick

If there’s one key insight from the study, it is that a people-centric approach that acknowledges both thinking and feeling dimensions of trust is essential. Trust is not just a monolithic, one-size-fits-all concept.

For starters, leaders introducing an AI tool to the workplace should provide comprehensive training that explains how the AI works, its capabilities and its limitations. Such efforts build cognitive trust. Leaders should also develop and communicate clear AI policies that define what data will be collected and how it will be used. This helps employees understand the tool’s role and capabilities and, just as importantly, how their concerns will be addressed and their personal data protected. When people feel at ease, they are more likely to form emotional trust.

This brings us to managing expectations about AI performance. Managers should encourage patience during the early stages, when results may be inconsistent. Celebrate AI-driven achievements and improvements to demonstrate progress and reinforce the value of the initiative.

The study also shows that leaders must address feelings, not just facts. Share your own enthusiasm about AI's potential benefits. Create psychological safety by encouraging the open expression of concerns about AI. Address anxieties with empathy rather than dismissal. When employees feel their emotions are acknowledged, they're more likely to develop positive connections with new technologies.

Remember: True AI transformation starts not with algorithms, but with a sophisticated understanding of the various forms of trust and a deliberate effort to foster them as part of your AI initiative.

Edited by: Seok Hwai Lee
