The water that carries a boat can also sink it, so goes an ancient Chinese saying. Generative AI is today's proverbial water: it can propel humankind to greater heights or, if we're not careful, overwhelm us with its power.
One way the technology can inadvertently cause harm is by worsening the climate crisis through its massive energy consumption, say INSEAD professors in this INSEAD Explains video series. Another is the erosion, or even loss, of problem-solving skills and creativity. Then there are the societal and legal implications of questionable and inaccurate content.
1. Worsening climate change
Phanish Puranam, Professor of Strategy
GenAI presents two key challenges for businesses. First, the massive energy consumption required to train models like ChatGPT exacerbates the climate crisis. Unless engineers can develop more efficient hardware, this technology may hinder rather than help us achieve environmental goals.
Second, over-reliance on AI may cause us to neglect, even lose, valuable skills like problem-solving and creativity. Businesses must strategically determine which skills to retain and which to outsource to AI, balancing efficiency with the preservation of uniquely human capabilities. These choices are not just about economics but also about identity in an AI-driven world.
2. Weakening higher-level reasoning
Hyunjin Kim, Assistant Professor of Strategy
While AI can automate tasks and enhance decision-making, early evidence suggests it may also impair higher-level reasoning skills. For instance, in financial firms, AI-powered predictions may improve investment decisions, but analysts' ability to explain and justify those decisions might decline.
This poses a challenge for businesses, as the ability to reason and communicate effectively is vital for stakeholder engagement. The key is to strategically integrate AI into workflows, ensuring that the technology enhances human capabilities rather than replaces them. This may involve redesigning processes to emphasise human reasoning and explanation, even as AI improves decision-making.
3. Producing harmful content and leaking sensitive information
Theos Evgeniou, Professor of Decision Sciences and Technology Management
GenAI’s ability to create vast amounts of questionable or inaccurate content can undermine trust in information sources and complicate efforts to combat misinformation. Harmful GenAI outputs, including hate speech, illegal content or polarising information, can also have social and legal repercussions. Additionally, AI-generated content may infringe intellectual property rights or compromise individual privacy.
Information leaks pose another concern. When users input proprietary code or sensitive data into AI models, there's a risk of unintended disclosure. Until we have effective control mechanisms, businesses must carefully consider the information they share with these systems, and prioritise the development and deployment of ethical AI.
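To make the leak risk concrete, here is a minimal illustrative sketch (not from the article) of the kind of control mechanism a business might place in front of an external model: scrubbing obvious secrets and personal data from a prompt before it leaves the firm. The patterns and names below are hypothetical placeholders for illustration; a real deployment would rely on vetted data-loss-prevention tooling rather than ad hoc regexes.

```python
import re

# Hypothetical redaction patterns for illustration only; a production
# system would use a vetted DLP / PII-detection library instead.
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "API_KEY": re.compile(r"\bsk-[A-Za-z0-9]{16,}\b"),
    "CARD_NUMBER": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(text: str) -> str:
    """Replace anything matching a known pattern with a labelled placeholder."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

if __name__ == "__main__":
    prompt = ("Summarise this support ticket: customer jane.doe@example.com "
              "reported that key sk-abc123def456ghi789jkl fails.")
    # Only the redacted version ever crosses the organisation's boundary.
    print(redact(prompt))
```

Even a crude filter like this illustrates the underlying principle: decide what information may cross the organisational boundary before the model ever sees it.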
Edited by: Seok Hwai Lee
Comments (2)
David H. Deans
Here's a different perspective based on my experience with large enterprise clients. Contrary to popular belief, critical thinking skills are not common across an organization. In many cases, few employees exercise intellectual curiosity that departs from the status-quo 'herd mindset' of their peer group. GenAI tools can be applied effectively to help and encourage people to discover alternative perspectives, and savvy leaders crave this rare skill. So don't underestimate the upside of human intellectual augmentation. It's much better than institutionalized mediocrity.
Alex GRIS
09/08/2024, 01.40 am
From a strategic point of view, I'd look at GenAI through 3 scenarios:
- How prepared is the organization, and how will it bring value (and with whom / with what skills), if AGI is reached within 5 years?
- What will the tooling, business intelligence and processes look like if there are only incremental improvements to AI in the next 5 years, particularly in step-by-step reasoning, planning and mathematical analysis? In this case there will still be a need for dedicated numerical processing systems, and the impact of reasoning and planning will probably be limited to relatively safe and narrow domains.
- What will the applications and business strategy look like if the capabilities of AI have reached a plateau, with minor improvements in accuracy, number of tokens, etc., but still not good enough for numerical analysis or significant reasoning tasks? This probably means that most applications will be in summarization, retrieval and chatbots: basically, language processing applications.