AI Tools May Not Always Speed Up Coding, New Study Shows

News Synopsis
Contrary to the widespread assumption that artificial intelligence (AI) tools boost developer productivity, a recent study suggests otherwise—at least for experienced programmers working within familiar codebases.
METR’s Study: AI Can Slow Down Developers
The nonprofit research organization METR (Model Evaluation & Threat Research) conducted a randomized controlled trial to evaluate the actual impact of AI-powered coding assistants on developer performance. The trial involved seasoned open-source contributors using Cursor, a widely adopted AI coding tool.
The results were eye-opening: developers took 19% longer to complete coding tasks while using Cursor compared to when they worked unaided.
“We found that when developers use AI tools, they take 19 per cent longer than without; AI makes them slower,” the researchers wrote.
Perception vs Reality: Why Developers Got It Wrong
Interestingly, developers expected the exact opposite. Before the trial began, they estimated a 24% speed increase with the help of AI. Even after finishing their assignments, they still believed AI helped them work faster—reporting a 20% perceived improvement.
This stark contrast between belief and reality surprised even the study’s lead authors.
Co-author Nate Rush had predicted “a 2x speed up, somewhat obviously.” As the findings showed, the reality was quite different.
Why AI Slowed Things Down
Suggestions Were Only 'Directionally Correct'
Lead researcher Joel Becker explained that AI-generated code suggestions were often not precise enough to be useful:
“When we watched the videos, we found that the AIs made some suggestions about their work, and the suggestions were often directionally correct, but not exactly what’s needed.”
As a result, developers had to spend more time reviewing and correcting the code, ultimately slowing their workflow.
A Shift from Past Findings
Previous studies have often portrayed AI as a game-changer in coding:
- One showed AI helped developers generate 56% more code
- Another reported a 26% increase in task volume
However, the METR study adds important nuance, suggesting that these productivity gains may not apply when developers are already familiar with the codebase or are highly skilled.
Will AI Work Better for Junior Developers?
The study’s authors were careful to note that these findings should not be over-generalized. They believe AI tools like Cursor might still prove beneficial in different contexts—particularly for junior developers or in unfamiliar coding environments.
Ease and Enjoyment Still Matter
Despite the slower performance, both the authors and many participants continue to use Cursor. Why?
“Developers have goals other than completing the task as soon as possible,” said Becker. “So they’re going with this less effortful route.”
AI may not always speed things up, but it can make the coding process feel easier or less mentally taxing, which may be valuable in long-term development settings.
What’s Next: Evolving AI and Future Research
This slowdown reflects the state of AI tools as of early 2025. With advancements in AI model training, better prompt design, and smarter code interpretation, tools like Cursor could become faster and more accurate in the future.
METR has also announced that it intends to keep running similar trials as AI technologies evolve, to better assess how these tools affect real-world productivity across diverse scenarios.
Conclusion: A Reality Check on AI Coding Tools
The METR study offers a crucial reminder that while artificial intelligence has immense potential to transform software development, it is not a one-size-fits-all solution. The assumption that AI tools universally enhance developer productivity—especially for seasoned coders working in familiar environments—may be overly optimistic.
As seen in the study, even when AI suggestions are “directionally correct,” the time spent reviewing and correcting outputs can offset perceived gains. This doesn’t negate the usefulness of AI tools like Cursor; rather, it underscores the need to refine these technologies further and to understand the contexts in which they are most effective.
Developers may continue using such tools for comfort and ease, but organizations must make data-driven decisions about AI integration in coding workflows. Future improvements in AI prompting and training could bridge current gaps, but until then, developers should view these tools as aids—not automatic accelerators of productivity. Real-world testing remains essential.