
Responsibility in the Age of Algorithms

How the digital economy is changing the way companies approach corporate responsibility.

From biotech and blockchain to crypto and chatbots, digital transformation is fundamentally reshaping every aspect of business. The area of corporate responsibility is no exception. These seismic technological shifts are now challenging previous boundaries when it comes to companies' relationships with, and responsibilities to, their stakeholders.

These shifts – which include the growth of the sharing economy (through the rise of organisations like Airbnb and Uber), increased personalisation of digital marketing, greater transparency in supply chains, and the pervasive use of AI in hiring, employee monitoring and autonomous processes – have changed how companies interact with customers, employees and other stakeholders. This in turn can alter a firm's responsibilities to those stakeholders, in good, bad and currently unknown or unclear ways.

Our recent article published in California Management Review explores what these potential ethical shifts mean for business leaders. How have issues around responsibility intensified, altered or, in some cases, been newly created thanks to this brave new digital world?

Intensify and alter

In many situations, what a company is responsible for has not necessarily changed. Issues like protecting users’ data privacy or managing a company’s energy consumption have always been important responsibilities assigned to firms. What the digital transformation has done is simply intensify these responsibilities. 

Consider that significant cybercrime – such as the recent release of Qantas account holders' data on the dark web – has risen by 50 percent in the last year, and it's easy to see why global hacker consortiums keep CEOs awake at night. Then there's the fact that energy consumption by data centres is set to more than double by 2030.

The digital economy is also shifting the goalposts when it comes to managing stakeholders. The rise of social media, along with greater access to information online, has tilted the scales towards those who previously had no voice. Activist groups, communities and even individuals are now able to make their grievances heard – and, as a result, companies now need to listen to what they have to say. 

New stakeholder groups might not be limited to human activists either. As robots and AI continue to evolve, they could also emerge as stakeholders whose rights must be considered. It might seem like science fiction, but researchers are already exploring robots' rights. In fact, a number of places in the United States already have legislation in place explicitly granting autonomous delivery robots the same rights and responsibilities as human pedestrians.

Who is responsible?

As the digital revolution has blurred the lines between companies and individuals, it is perhaps the boundaries of who is responsible for what that have been most seriously questioned. The rise of the sharing economy is one clear case of this in practice. Take the home-stay experience platform Airbnb, where individuals are able to engage with the company as both a consumer and a service provider. Although there are clear guidelines in place, who should take the blame if a user has an accident while staying in a holiday home booked through the firm’s app? Or, on a broader societal level, what is Airbnb's responsibility when landlords use the platform to rent out apartments to tourists, pricing locals out of popular neighbourhoods in cities like Barcelona and New York?

Thanks to the growth of digital networks and the development of blockchain technology, multi-sided markets have created increasingly complex digital ecosystems made up of numerous collaborating actors. This makes it increasingly challenging to pinpoint exactly who is responsible when individual failures occur, since companies can no longer behave as independent actors. Instead, firms find themselves reliant on others in the ecosystem to ensure that the end user gets the product, experience or service they expect.

This is exemplified by the rise of super apps like Grab, which hails from Southeast Asia. Originally focused on transport, it has now expanded to provide delivery, travel, insurance and financial services. For all of these offerings, it's reliant on other actors to deliver on the service promised, whether that's a third-party restaurant, taxi driver, travel agent or financial advisor. At what stage of the interaction can Grab realistically pass on the responsibility for poor services or products to these third-party agents?

In a similar vein, there’s been much debate about the accountability of social media companies like Meta, Alphabet and ByteDance for the third-party content they host. How far should Facebook, TikTok or X be expected to monitor and control the content that appears on their platforms, and how much responsibility lies with the individual users? Can companies really claim to be neutral intermediaries when their platforms have been shown to distribute hate speech, promote human trafficking and even make ransom demands?

The responsibility gap

The waters around this debate have been further muddied by the recent explosion of generative AI and the deployment of AI chatbots. Who is really responsible for how a chatbot responds to queries on everything from travel advice to mental health tips? At their most benign, chatbots can share inaccurate or false information. At their worst, they can pose a real danger. As an extreme example, take the ongoing lawsuit against OpenAI, whose generative AI chatbot ChatGPT is accused of contributing to a teenager's suicide.

The increasing roles of non-humans – be they robots or algorithms – in the processes and actions of a firm have led to a “responsibility gap”. The actions of such machines challenge the assumption that moral responsibility rests with humans and their free will. Tesla is a prominent case of this in practice, having faced multiple legal claims relating to incidents involving its semi-autonomous vehicles. But where exactly to apportion responsibility for these accidents – with the human driver, the semi-autonomous car or the company – is not always clear.

Amid these developments, it's important that managers don't adopt a strategy of deflection. They should avoid hiding behind the technology and be proactive about establishing clear boundaries of responsibility in any negotiations and contracts with their suppliers and clients. That also requires managers to follow regulatory developments closely. For example, the self-driving robotaxi company Waymo is set to become liable for driving violations by its autonomous vehicles in San Francisco from next July. Organisations need to anticipate such legislative changes and understand that their responsibilities may alter, often as governments and wider society try to keep pace with the impact of technological change.

The digital revolution means that business leaders must shift their thinking in a wide range of areas. The field of corporate responsibility is no different. Managers must be ready to update their basic assumptions about who and what a company is responsible for, and when. To stay relevant, they need to break down organisational silos, understand regulatory change and keep up to date with potential opportunities and challenges. What none of us can afford to do is cede responsibility by hiding behind the technology.

Edited by: Nick Measures


About the research

"Corporate Responsibility Meets the Digital Economy" is published in the California Management Review
