
AI Impact Brief

The AI Amplification Effect

What two years of DORA data and five industry reports reveal

+98% / -7.2%

PRs merged vs. delivery stability. Same teams. Same year.

AI is accelerating individual output while organizational outcomes stay flat. The variable that determines which side you land on? The quality of your engineering practices.


2024: The warning shot

When DORA's 2024 Accelerate State of DevOps Report landed, the AI findings surprised everyone. For the first time, AI adoption showed up in the data—and it correlated with worse delivery outcomes, not better.

The high-performing cluster shrank from 31% to 22%. The low-performing cluster grew from 17% to 25%. A 25% increase in AI adoption predicted a 1.5% decrease in delivery throughput and a 7.2% decrease in delivery stability. The only positive AI signals were documentation quality (+7.5%) and perceived code quality (+3.4%).

The mechanism was batch size. As Laura Tacho observed: "AI introduces risk not because of garbage code, but because batch size seems to increase." More code per change means more risk per deployment. And 39.2% of developers reported low or no trust in AI-generated code.

  • -7.2% delivery stability
  • -1.5% delivery throughput
  • 39% distrust AI code

2025: The amplification thesis

A year later, DORA didn't just update the data—they rebranded the entire report. The 2025 edition is titled State of AI-Assisted Software Development. That name change tells you where the industry's center of gravity has moved.

The throughput story reversed: task completion up 21%, PRs merged up 98%. But stability stayed negative—bug rates up 9%, PR size up 154%, review time up 91%. The net organizational impact? Flat.

"AI is an amplifier, not a fixer."

— DORA State of AI-Assisted Software Development, 2025

Teams with strong engineering practices saw AI multiply their effectiveness. Teams with weak practices saw AI multiply their dysfunction. The variable wasn't AI adoption—it was the quality of what AI was amplifying.

The 2025 report also introduced a fifth metric—Rework Rate—measuring the percentage of unplanned deployments to fix user-facing bugs. It captures exactly the hidden cost that throughput metrics miss: shipping faster without shipping better.
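The definition above reduces to a simple ratio. As a minimal sketch (the field names and data shape here are illustrative assumptions, not DORA's instrumentation), Rework Rate can be computed like this:

```python
from dataclasses import dataclass

@dataclass
class Deployment:
    planned: bool                 # was this deployment scheduled work?
    fixes_user_facing_bug: bool   # does it exist to repair a user-facing bug?

def rework_rate(deployments: list[Deployment]) -> float:
    """Share of deployments that are unplanned fixes for user-facing bugs."""
    if not deployments:
        return 0.0
    rework = sum(
        1 for d in deployments
        if not d.planned and d.fixes_user_facing_bug
    )
    return rework / len(deployments)

deploys = [
    Deployment(planned=True,  fixes_user_facing_bug=False),  # feature release
    Deployment(planned=False, fixes_user_facing_bug=True),   # hotfix
    Deployment(planned=True,  fixes_user_facing_bug=True),   # scheduled bugfix
    Deployment(planned=False, fixes_user_facing_bug=True),   # hotfix
]
print(f"{rework_rate(deploys):.0%}")  # → 50%
```

Note that a scheduled bugfix does not count: only unplanned deployments register as rework, which is why a team can ship more PRs while this number quietly climbs.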

The industry converges

DORA isn't alone. Five major 2025 reports tell the same story from different angles:

  • 46% distrust AI output. Up from 31%; "almost right" code is the #1 frustration.
  • Net gain: zero. AI saves ~10h/week, but organizational friction loses ~10h/week, and coding is only 16% of dev time.
  • 91% adoption, 3.6h saved (GetDX Q4 2025, 135K devs). 22% of merged code is AI-authored. "Adoption doesn't equal impact."
  • +25% commits, +23% PRs. 80% of new devs use Copilot in their first week. Volume ≠ quality.
  • Only 6% of organizations qualify as "AI high performers" (McKinsey State of AI 2025, enterprise survey). The rest see 10–20% cost reductions but can't translate them into sustained delivery improvement.

What it all adds up to

Every report tells the same story: AI is accelerating individual output while organizational outcomes stay flat or degrade. The variable that determines which side you land on is the quality of your engineering practices—the structural health of your codebase, your testing discipline, your review processes, your knowledge distribution.

Free 1-hour DX coaching session

A focused session for engineering leaders who want to understand what's actually happening in their codebase, with a live walkthrough of what DX Coach reveals.

Book your session (limited to 25 teams)
Unlock the data behind the story
  • Complete data tables from DORA 2024 and 2025
  • Side-by-side comparison of 5 adjacent reports
  • DX Coach capability mapping to industry findings