Missing Links In Agentic Coding
By Andrei Roman
Principal Architect, The Foundry
There is a distinct lifecycle to adopting AI in software engineering.
Phase one is the dopamine rush. I wrote about this back in November with "Vibe Coding" Like A Professional. You spin up a brand new project, prompt an LLM, copy-paste a near-perfect snippet, and watch a feature materialize out of thin air. Using agents to build something brand new is WOW. Recently, I built ZapEngine - a WASM + WebGPU game engine - in less than a week using this exact high-velocity workflow.
But then you take that same AI agent, point it at an ossified, 10-year-old enterprise codebase, ask it to add a feature, and the result is... a different kind of WOW. Not good.
The agent brings its own assumptions: it hallucinates dependencies, breaks encapsulation, and rewrites working logic because it doesn't understand the undocumented historical reasons the code was written that way in the first place.
Why the massive discrepancy?
It comes down to two things: Context and Tools.
The Shovel and the Excavator
I picked up this (far from perfect) analogy from someone on social media. Just to understand our current situation, imagine a crew of diggers who have spent years perfecting their technique with shovels. Suddenly, someone invents the excavator.
The immediate reaction from the crew is panic, mixed with a stubborn refusal to put the shovel down because "we still have a trench to dig today." But operating an excavator is a completely different skill than swinging a shovel. The point being: The excavator is not going to take your job. But the guy who learned to operate the excavator just might.
Years ago, when compilation took minutes or hours and someone was getting up for coffee after starting a long make, I used to joke with my teams: "What are we? Compiler operators?" (intentionally omitting the thinking part of the workflow)
Today, what are we becoming? AI Operators? (cue giggles)
For the last 20 years, my workflow - and likely yours - was a blended process of Thinking and Editing. We formed a mental model of the system while our fingers physically typed the syntax. The advantage was a deep, intimate connection with the logic. The disadvantage was that we got fiercely attached to our code. We treated it like art instead of a utility. And I appreciate that - we derived good practices because of it, practices even the AI now knows.
With agentic coding, the AI does the Editing. The code becomes cheap, fast, and almost entirely disposable. You don't get attached to a file when an agent can rewrite it in four seconds.
But you still have to Think and Model. Your judgment gets to shine. You spend your entire day evaluating architecture, enforcing boundaries, and dictating constraints.
The downside? Maxing out your judgment and architectural thinking for eight straight hours is brutally exhausting. You aren't typing anymore; you are continuously directing a brilliant junior developer who works at the speed of light but has severe amnesia (and never learns - until the next model version, which always arrives with new tricks).
The Junior Developer Equation
Speaking of juniors, there is a lot of bemoaning in the industry right now: "If AI writes all the boilerplate, we can't give juniors the menial tasks anymore! How will they learn?"
Good.
Juniors, you should be thrilled. You no longer have to spend your first two years writing CRUD endpoints and mapping database DTOs. You now get to learn the actual craft of software engineering - you get to see what counts. You get to talk to users more. You get to map domains. You get to be judged on the quality of your thinking, rather than your ability to memorize framework syntax. The baseline has been raised.
The Missing Link: Why Agents Fail at Legacy
Why do they fail at legacy code? Even a project started with AI at breakneck speed eventually reaches a point where PROGRESS is getting SLOW - not because editing is slow, but because the opportunities for mistakes are MULTIPLYING. Any wrong assumption or bad code that gets into context is treated as "this is OK now," which leads to compounding errors. And no matter how advanced, the AI will make mistakes.
And right now, agentic coding is trying to compress 50 years of management know-how and SDLC lessons, but WITHOUT the actual tooling to support it. Essentially, we are just winging it.
Anthropic just proved my point with their code leak, which is making the rounds as I'm typing this. I mean, "let's write some tasks in a plan, then check them off" is not exactly best practice at scale.
Think about version control. git is a masterpiece because it encapsulates exactly what we learned about concurrent development, branching, and state management. You use git. The AI uses git. It’s a perfect tool.
But what tool do we have for architectural context?
Context is the single biggest lever we have with LLMs. Right now, to keep an agent on track in a legacy system, what do we do? We use Obsidian. We explore the repo manually. We leave markdown files like AGENTS.md and ADRs scattered around as breadcrumbs, hoping the agent's limited context window picks up the scent.
It's ok for now, but it's fragile - it's only a workaround.
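To make the breadcrumb approach concrete, a repo-root AGENTS.md might look something like this (the paths, rules, and ADR number here are invented for illustration, not from any real project):

```markdown
# AGENTS.md

## Architecture boundaries
- `billing/` must not import from `web/`; it communicates only via events.
- `legacy/ledger.py` is frozen: extend it, never refactor it.

## Historical context
- The retry loop in `sync.py` looks redundant but works around a
  vendor API race condition (see ADR-014). Do not remove it.

## Conventions
- Run `make check` before proposing any change.
```

The fragility is obvious: nothing enforces these rules, nothing keeps them current, and the agent only benefits if the file happens to fit in - and survive in - its context window.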
The AI has recipes. It knows "good practices." But its understanding of your specific, messy, ossified legacy system is not there.
And grep-ing around doesn't build understanding.
Building the Missing Tooling
If we are going to treat AI Engineering as Engineering, we need better tools.
To bridge the gap between intent and legacy execution, the next generation of developer tools must possess specific traits:
CLI-First: They must be natively callable by the agent.
Clear Output: They must return deterministic, machine-readable facts, not conversational prose.
Economy of Context: They must give the agent exactly the dependency or boundary information it needs to make a change, without blowing up the context window with 50 irrelevant files.
Minimal Friction: When things go wrong, they must give meaningful feedback that points the agent in the right direction.
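A minimal sketch of what such a tool could look like, assuming a Python codebase. The tool itself ("depmap") and its output shape are invented for illustration - the point is the traits: callable from a CLI, deterministic JSON facts instead of prose, and only the dependency information for one file at a time.

```python
# depmap: a hypothetical CLI-first context tool for agents.
# Given one source file, it emits the file's top-level imports as JSON -
# a deterministic, machine-readable fact, scoped to a single file so it
# never floods the agent's context window.
import ast
import json
import sys
from pathlib import Path


def module_deps(source: str) -> list[str]:
    """Return the sorted set of top-level modules a Python file imports."""
    tree = ast.parse(source)
    deps: set[str] = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            deps.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            deps.add(node.module.split(".")[0])
    return sorted(deps)


def main(path: str) -> None:
    # Facts, not conversation: stable JSON the agent can parse directly.
    facts = {"file": path, "imports": module_deps(Path(path).read_text())}
    json.dump(facts, sys.stdout, indent=2)


if __name__ == "__main__" and len(sys.argv) > 1:
    main(sys.argv[1])
```

An agent would invoke it like any other command (`depmap src/billing/invoice.py`) and get back a small, parseable payload instead of fifty irrelevant files - the same design contract that makes git such a good fit for both humans and agents.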
We don't need the AI to "think harder" about legacy code. That only forces it to hallucinate. We need to provide it with a deterministic map of the system's reality.
The tools to map these boundaries for AI agents don't quite exist yet. So, naturally, I'm building.
Welcome to The Foundry.