There is a lot of noise right now about AI agents. Most of it focuses on what agents can theoretically do, with very little attention paid to the question of whether they actually do it well, and with appropriate guardrails, in practice.
GitHub Copilot Modernization is a useful counterpoint to that noise. It is an agentic tool designed to help teams upgrade legacy .NET applications to modern framework versions and migrate them to Azure. Microsoft published part two of a series on it this week, walking through what the tool handles, what it does not, and how the workflow is structured.
The workflow itself is worth paying attention to. The agent follows an assess, plan, execute sequence. It produces an assessment report, generates a migration plan in a Markdown file you can inspect and edit before committing, then breaks execution into discrete tasks with validation criteria and Git commits at each stage. If something fails, it tries to identify and fix the cause rather than stopping. You are reviewing the work throughout, not just approving it at the end.
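To make that concrete, a plan-as-Markdown file of the kind described might look roughly like this. This is an illustrative sketch only; the article does not show the tool's actual plan format, and the task names and validation criteria here are invented:

```markdown
# Migration Plan (illustrative sketch, not the tool's actual format)

## Task 1: Retarget projects to a modern .NET version
- Validation: solution builds cleanly with `dotnet build`
- Commit: "Retarget projects to net8.0"

## Task 2: Upgrade NuGet packages
- Validation: restore succeeds with no version-conflict warnings; tests pass
- Commit: "Upgrade NuGet dependencies"
```

The point of the format is that it is plain text under version control: you can edit a task, delete one, or reorder them before the agent executes anything.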
That is a meaningful structural choice. Most of the AI agent discourse assumes that autonomy is the goal and oversight is a compromise. This inverts that assumption: the human-in-the-loop checkpoints are load-bearing, not cosmetic. They exist because the agent cannot guarantee it will always pick the best migration path, and Microsoft says so plainly in the article.
The tool is genuinely good at the mechanical, repetitive parts of a .NET migration: updating target frameworks in project files, upgrading NuGet packages, swapping deprecated APIs for modern equivalents, replacing hardcoded credentials with managed identity, moving file I/O to Azure Blob Storage, and updating authentication from on-premises Active Directory to Microsoft Entra ID. These are structured transformations with predictable before-and-after patterns, and an agent is well suited to them.
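The target-framework update is a good example of why these transformations suit an agent: the before and after shapes are well known. A legacy project typically moves from the old MSBuild format to an SDK-style project file, roughly like this (project contents are illustrative):

```xml
<!-- Before: legacy .NET Framework project (old MSBuild format, abridged) -->
<Project ToolsVersion="15.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <TargetFrameworkVersion>v4.8</TargetFrameworkVersion>
  </PropertyGroup>
</Project>

<!-- After: SDK-style project retargeted to modern .NET,
     with package references moved into the project file -->
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>net8.0</TargetFramework>
  </PropertyGroup>
  <ItemGroup>
    <PackageReference Include="Newtonsoft.Json" Version="13.0.3" />
  </ItemGroup>
</Project>
```

Every one of these edits is mechanical and verifiable by a build, which is exactly the property that makes them safe to delegate.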
Where it does not go, and what teams should understand before adopting it, is anywhere that requires judgement beyond code-level transformation. It will not handle IIS-specific web.config settings that have no direct equivalent in Kestrel. It will not set up CI/CD pipelines or provision infrastructure. It will not rewrite complex Entity Framework migrations with hand-tuned SQL. Web Forms migration is listed as underway, which is notable given how much legacy estate still sits on Web Forms in Financial Services and public sector organisations.
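A concrete example of the kind of setting left to humans (the module name here is invented): a custom HTTP module registered in web.config runs inside the IIS pipeline and has no one-to-one Kestrel equivalent, so it has to be reimplemented by hand as ASP.NET Core middleware:

```xml
<!-- web.config (IIS integrated pipeline): registers a custom HTTP module.
     Kestrel has no module pipeline, so this cannot be translated mechanically. -->
<system.webServer>
  <modules>
    <add name="AuditModule" type="Legacy.Web.AuditModule, Legacy.Web" />
  </modules>
</system.webServer>
```

Deciding what that module actually did, and whether it should become middleware, a filter, or nothing at all, is exactly the judgement call the tool declines to make.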
One thing the article does not address, which is worth flagging: the agent has no memory between sessions. If you correct its approach on one project, that learning does not carry forward to the next. You can encode standards using custom skills, but that is a separate setup exercise, and the article does not explore it. For anyone modernising a large estate across multiple teams, that statefulness gap is a real operational consideration.
This is part two of a series, so some of those gaps may be covered elsewhere. But the broader point stands: this is an honest and well-structured description of what an AI agent should look like in a production context. Bounded scope, transparent reasoning, human checkpoints, and an explicit acknowledgement of where it falls short. That combination is rarer than it should be.
Source: Explaining what GitHub Copilot Modernization can (and cannot) do