The call came from an engineering manager I'd worked with before. The team had adopted AI coding tools about eight months earlier. Things had gone well initially — velocity was up, everyone was enthusiastic, the tools had delivered what was promised.
But something had started to go wrong. Not dramatically. Gradually. The junior developers were shipping at a pace that looked impressive, but code review cycles were getting longer, not shorter. Senior engineers were spending more time in review than they had been before the tools. Bugs that should have been caught earlier were reaching staging. And a few of the junior developers, when asked to explain their implementation choices, were struggling in ways that hadn't been apparent from their output.
The manager's read: "They're shipping more, but I'm not sure they're learning."
That observation gets at something I think is the central management challenge of the AI tooling moment — not for large technology companies, but for engineering teams at the scale where most companies actually operate.
The asymmetry
AI coding tools amplify the judgment you bring to them. For experienced engineers, this is almost entirely good. They can evaluate the output, catch the subtle errors, ask better questions of the tool, and use it to accelerate work they already understand well.
For engineers who are still building that judgment, the dynamic is more complicated. The tools provide a path to shipping faster, but they shortcut the process through which the underlying understanding typically develops. The debugging, the refactoring, the reading of code that didn't work — these are where the intuition for what's happening inside a system actually forms. When a tool generates working code for you, you skip some of that.
The result, in teams I've seen that haven't adjusted to this, is a productivity asymmetry that looks like success but is accumulating a debt. Junior engineers are shipping more, but the things they're shipping require more senior oversight to be reliable. The senior engineers are getting faster too, but they're also absorbing more review burden. The team looks more productive in aggregate. But the distribution of who is learning — and what — is shifting in ways that will matter later.
What the junior engineer role is actually for
I want to be precise about this, because it's easy to read the above as pessimistic about junior engineers or about the tools.
Junior engineers in a well-functioning team serve two purposes: they contribute to the work, and they develop into the senior engineers the team will need in two or three years. Both matter. The tools affect both, but differently.
The contribution side is genuinely improved. A junior developer with good AI tooling can deliver real value faster, work on a wider range of problems, and handle more surface area than before. That's positive.
The development side is where teams need to be more intentional. If the default path for a junior engineer is to generate code and submit it for review, they will miss the learning that used to happen in the process of writing code from scratch. Not because the tools prevent it — they don't — but because the path of least resistance doesn't require it.
What I've seen work
The teams that have navigated this well haven't restricted the tools. They've changed what they ask junior engineers to do with the outputs.
One specific change that I've seen work: requiring junior developers to explain their implementation choices — not just in the PR description, but in a brief conversation. Not "why did you use a for loop here" but "what would happen to this code if the input format changed? What's the failure mode if the upstream service is slow?" These questions can't be answered by reading the generated code — they require understanding it.
Another: separating code-generation tasks from code-understanding tasks in how work is assigned. Use the tools freely for the former; carve out deliberate time for the latter: reading unfamiliar parts of the codebase, working through a debugging session without AI assistance, pairing on a problem before reaching for the tool.
The instinct of most teams is to treat the productivity gains as uniform and additive. The reality is that what looks like a junior developer performing at senior velocity is often a senior engineer's review judgment distributed differently, and at some point that bill comes due.
The manager's job has changed
The management challenge here is that output has become a less reliable signal of development. A junior developer who ships a lot of code may or may not be developing the engineering judgment that makes them a reliable contributor at the next level. The traditional proxy — watch the output, track what they build, see how it holds up — is noisier than it used to be.
The managers I've seen handle this well have moved more of their assessment to the understanding questions: Can this person explain the system? Can they reason about what will break and why? Can they debug without the tool? These questions require more time and more direct conversation than reviewing output metrics. But they're the questions that tell you what's actually being learned.
The other thing that's changed is what you're optimising review cycles for. Code review used to be primarily about correctness and quality. It still is — but with AI-generated code, it increasingly needs to include a learning function. Not just "is this correct" but "does the author understand why this is correct."
That's a different kind of review, and it takes longer. The teams that are accounting for this are budgeting differently for review time on junior developer work — not treating it as a fixed cost that should go down as the tools get better, but as an investment in the capability the team needs to build.
If you're managing an engineering team and navigating the AI tooling shift, I'm happy to think through what's working and what isn't — get in touch.