How On‑Demand AI Assistance Undermines Learning

Giving learners on‑demand AI assistance can erode practice, productive struggle and long‑term skill growth – even when they know it harms their learning.

Imagine two students training with the same AI tutor. Both receive tips at key moments, but one can additionally request help whenever they want. Who learns more? In theory, giving students greater agency should increase engagement and active participation. However, in practice, without adequate self-regulation, students may become over-reliant on AI assistance – which could ultimately undermine learning.

As AI tutoring tools provide personalised, on-demand support at an unprecedented scale, the question of who should control access to that support has become one of the most fundamental in management education, employee training and student learning. I studied this with Hamsa Bastani from The Wharton School and Osbert Bastani from the University of Pennsylvania. We recruited over 200 chess students for a 12-week intensive AI-enabled training programme. Students were randomly assigned to either a system-regulated condition (in which the platform automatically provided AI tips at key moments) or a self-regulated condition (in which students could additionally request help at any time by clicking a button). The only difference between groups was access to this button.

After 12 weeks, those who could request AI help at any time improved their performance by just 30 percent, compared to 64 percent for students in the system-regulated group. Students with on-demand AI access learned less than half as much, a remarkably large effect for such a subtle difference in system design. Even more striking, these students were fully aware of their over-reliance on AI yet continued to increase their use of it over time.

These findings matter far beyond chess. As AI tools proliferate across schools and workplaces, from coding assistants to medical decision-support systems, users have unprecedented control over when they receive help. Our research suggests that even small differences in system design can have dramatic consequences for long-term human skill atrophy.

Why self-regulated AI use hinders learning

While some learning loss from the excessive use of AI assistance is to be expected, the magnitude of the effect demands explanation. Our analysis points to two potential mechanisms that amplified the learning loss.

First, on-demand AI assistance short-circuited productive struggle. Students in the self-regulated condition performed significantly better during training games, but this initial success came at the expense of long-term learning. When tested without AI assistance, they performed substantially worse than their counterparts who had struggled through difficult challenges on their own.

The damage wasn't uniform across all types of assistance. Learning losses were driven specifically by students requesting assistance on problems within their “zone of proximal development” – problems that were challenging but feasible for their skill level. These are precisely the problems for which struggle, error and targeted feedback produce the greatest learning gains. 

By bypassing this process with on-demand solutions, students deprived themselves of the very experiences that build expertise. Interestingly, requesting help on trivial problems (below their skill level) or highly complex ones (far beyond their abilities) had little impact on learning. The damage occurred specifically when AI assistance displaced productive struggle on appropriately challenging problems.

Second, self-regulation reduced overall engagement. Students with on-demand access completed 24 percent fewer training games than their system-regulated peers. Our post-study surveys revealed why: Many reported that clicking the help button diminished their sense of accomplishment and made training feel less rewarding.

Yet, despite this awareness, these same students increasingly relied on AI over time. In the first week of training, they requested help approximately five times per game. By week 12, this had more than doubled to 11 requests per game. Meanwhile, students in the system-regulated condition steadily improved, eventually closing the performance gap with their self-regulated peers.

This pattern reveals a classic failure of self-regulation: short-term convenience overrides long-term goals, even when the trade-off is fully understood. Our chess students weren't naive about AI's risks; they were caught in an agency trap where immediate ease consistently won over future learning.

When motivation matters (and when it doesn't)

We explored whether student skill or motivation might moderate these effects. Common wisdom suggests that more skilled or motivated learners would be better equipped to self-regulate their use of AI assistance. Our findings tell a more nuanced story.

Highly motivated students – those who reported spending more hours per week on chess prior to the study – experienced substantially smaller learning losses from self-regulated AI access. However, even among the most motivated students, on-demand assistance still reduced learning compared to system-regulated assistance. Skill or expertise, by contrast, offered no such protection: beginners and advanced players alike fell into the over-reliance trap. This finding contradicts the prevailing view that expertise enables better self-regulated learning.

The design implications are clear. Organisations deploying AI‑assisted learning and training systems should:

  • Resist the urge to give users unlimited control. The intuition that more choice enables better learning can be wrong. Educational AI systems should algorithmically determine when to help, targeting the moments when assistance accelerates learning rather than displacing it.
  • Recognise that user awareness is not enough. Our students knew they were over-relying on AI but couldn’t help themselves. System design must account for the gap between intentions and behaviour. Don’t expect users to self-regulate effectively, even when they understand the risks.
  • Consider the types of assistance provided. Our research suggests that what we call attention signals – alerts that flag important decisions without prescribing solutions – can encourage engagement without triggering over-reliance. These signals prompt learners to slow down and think carefully at critical moments while preserving the productive struggle essential for learning.
  • Monitor engagement metrics alongside performance. In our study, students with on-demand AI access didn't just learn less, they also practised less. Reduced engagement may be an early warning sign that assistance is undermining rather than supporting learning. Besides tracking whether users are improving during training, track whether they remain motivated to train at all.

Using AI doesn’t inevitably lead to skill atrophy. The risk arises specifically when assistance displaces productive struggle on appropriately challenging problems. By limiting AI assistance there, but allowing it freely elsewhere, productivity gains need not come at the cost of skill development.

Beyond the classroom

AI assistance, whether in software engineering or medicine, presents a double-edged sword for skill development. While it can help streamline workflows and free up cognitive resources, an over-reliance on AI for complex problem-solving is detrimental. If junior staffers habitually defer to AI-generated solutions for challenging cases, they risk failing to cultivate the fundamental reasoning skills required when these systems encounter unexpected problems or fail.

As we integrate AI more deeply into education and training, we must design these systems with their long-term effects on human capability in mind. The chess students in our study were diligent learners with strong motivation to improve, training in a domain where AI assistance has been available for decades. If even these learners fall into the trap of over-reliance, the risks are likely far greater in contexts where motivation is lower, or users are less aware of AI's limitations.

Edited by:

Verity Ashton
