Walk into many mid-sized businesses in healthcare, manufacturing, legal, or local government, and there’s a reasonable chance their most critical operational software was built between 1995 and 2010. The original developers are gone. Documentation is sparse or nonexistent. The system runs on an operating system that hasn’t received security updates in years.
Nobody planned for this. It’s the accumulated result of rational decisions made one at a time: the system works, the replacement costs are enormous, and the risk of disrupting something critical outweighs the discomfort of running something old. So the decision gets deferred. Again and again. Until the technical debt becomes a genuine liability.
This is the legacy software problem. It’s larger and more pervasive than most people outside the industry appreciate — and it has resisted every solution the industry has offered for the better part of two decades.
Why the Traditional Options Are All Bad
The problem has persisted not because organizations are complacent, but because the available options are genuinely poor.
Custom rebuild — typically $300K–$2M and twelve to twenty-four months — faces a fundamental challenge beyond cost and time. Critical systems can’t simply be taken offline and swapped out. The existing system has to keep running while the replacement is built, which means parallel operation, complex data migration, and extended periods where two systems are being maintained simultaneously. And when the rebuild is done, the organization has a new custom system that will itself eventually become legacy.
Migration to SaaS — cheaper upfront, but structurally difficult. Off-the-shelf software is designed for the average case; legacy systems are the opposite — they’ve been customized over years to fit specific workflows that exist nowhere else. Forcing those workflows into a generic SaaS product means the organization adapts to the software rather than the software serving the organization. The productivity loss during transition is real, and some of what gets lost in migration never comes back.
Staying put — the most common choice, and not an irrational one given the alternatives. The risk is that “staying put” is a strategy that works until it doesn’t, and the failure mode can be severe: a security breach, an OS-level incompatibility, or the departure of the last person who understood how the system actually works.
None of these options are good. The question is whether OSA platforms offer something genuinely different, or whether they’re repackaging existing capabilities in new language.
What OSA Platforms Claim to Offer
The argument from OSA proponents breaks the legacy modernization problem into two steps.
The first is reverse engineering: before anything gets rebuilt, an OSA platform analyzes the existing system — its behavior, data structures, edge cases, and business logic — and produces structured documentation. The goal is to capture what the system actually does, including all the institutional knowledge that exists only in the software itself, before attempting to replace it.
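One concrete form this capture could take is a characterization trace: wrapping the legacy system's entry points and recording every observed input/output pair as structured data. The sketch below is a minimal illustration in Python, not any real OSA platform's method; `legacy_discount` and its loyalty rule are invented stand-ins for the kind of undocumented logic such systems contain.

```python
import functools
import json

TRACE_LOG = []  # in a real pilot this would be a persistent store, not a list

def traced(fn):
    """Record every call to a legacy entry point as a structured input/output pair."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        result = fn(*args, **kwargs)
        TRACE_LOG.append({
            "function": fn.__name__,
            "args": list(args),
            "kwargs": kwargs,
            "result": result,
        })
        return result
    return wrapper

# Hypothetical legacy rule: a billing discount with a long-forgotten edge case.
@traced
def legacy_discount(amount, customer_years):
    if customer_years >= 10 and amount > 500:  # undocumented loyalty threshold
        return round(amount * 0.85, 2)
    return round(amount * 0.95, 2)

legacy_discount(600, 12)
legacy_discount(600, 2)
print(json.dumps(TRACE_LOG, indent=2))
```

The trace documents what the system actually did, including edge-case behavior nobody wrote down; it becomes the raw material for the rebuild step.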
The second is rebuilding: with that documentation as input, the platform rebuilds the system as a modern application — not a generic replacement, but one designed to mirror the original system’s logic while updating the underlying architecture.
If this works as described, it addresses the most fundamental challenge in legacy modernization: the knowledge problem. The reason rebuilds are so risky is that so much of what a legacy system does is implicit — embedded in code that nobody has read in years, handling edge cases that nobody remembers exist. Capturing that before rebuilding changes the risk profile of the project significantly.
The Questions Worth Asking
Before accepting this framing, several things deserve scrutiny.
How complete is the reverse engineering in practice? Analyzing existing system behavior is harder than it sounds. Systems that have been in production for twenty years accumulate behavior that wasn’t designed — bugs that became features, workarounds that became standard process, integrations that aren’t documented anywhere. An automated analysis will capture what it can observe; what it misses may be exactly the edge cases that matter most.
What’s the actual human involvement required? No honest account of AI-assisted rebuilding claims zero human involvement. The real question is how much domain expertise is required, where in the process it’s needed, and what happens when that expertise isn’t available. Legacy systems in specialized industries — healthcare billing, manufacturing control systems, legal case management — involve domain knowledge that’s genuinely hard to encode.
How does the rebuilt system get validated? Proving that a new system behaves identically to an old one under all conditions is a hard problem. The test suite that would validate this has to be built, and building it requires understanding the original system well enough to know what to test. That’s circular in ways that matter.
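One partial answer, sketched here under the assumption that both implementations can be driven programmatically, is parallel-run validation: replay recorded inputs against both systems and diff the outputs. The function names below are illustrative, not taken from any real platform.

```python
# Hypothetical stand-ins for the old and new implementations of one business rule.
def legacy_calc(amount, years):
    if years >= 10 and amount > 500:   # undocumented loyalty rule
        return round(amount * 0.85, 2)
    return round(amount * 0.95, 2)

def rebuilt_calc(amount, years):
    rate = 0.85 if years >= 10 and amount > 500 else 0.95
    return round(amount * rate, 2)

def parallel_run(cases):
    """Replay each recorded case against both systems; return any divergences."""
    mismatches = []
    for amount, years in cases:
        old, new = legacy_calc(amount, years), rebuilt_calc(amount, years)
        if old != new:
            mismatches.append({"input": (amount, years), "legacy": old, "rebuilt": new})
    return mismatches

# Boundary-heavy cases matter most: the edge cases nobody remembers.
cases = [(500, 10), (500.01, 10), (600, 9), (600, 10), (0, 0)]
print(parallel_run(cases))  # an empty list means no divergence was observed
```

Agreement on sampled cases is evidence, not proof: the comparison only covers the inputs someone thought to replay, which is exactly the circularity described above.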
What does the ongoing maintenance model look like? Legacy software became legacy partly because the organizations running it didn’t have good options for ongoing maintenance. A rebuilt system on a modern platform improves this situation significantly — but the platform relationship now becomes a dependency. What happens if the OSA vendor changes pricing, changes direction, or ceases to exist?
An Honest Assessment
The legacy modernization use case is one of the more credible applications for OSA-style platforms, precisely because the problem is so well-defined and the alternatives are so poor. The knowledge problem — capturing what an undocumented system actually does — is exactly the kind of task where AI-assisted analysis might provide genuine value that’s difficult to achieve otherwise.
But “might provide genuine value” and “delivers reliably at production quality” are different claims, and the evidence for the latter is still early. The most honest position is that this is a promising application area that deserves careful, evidence-based evaluation — not blanket skepticism, but not uncritical adoption either.
The organizations best positioned to evaluate this objectively are those with a specific legacy modernization problem in hand, a realistic understanding of what the alternatives cost, and the capacity to run a properly scoped pilot before committing to a full rebuild.
Subscribe for further writing on OSA — what the evidence shows and where the open questions remain.