Six agents have done their work. You have a discovery report, a domain decomposition, a target architecture, a security audit, an observability design, and a cost model. Thousands of lines of structured, evidence-based analysis.
Now what?
This is where most modernization assessments stall. The analysis sits in a shared drive. Someone schedules a planning meeting. The meeting produces action items. The action items produce more meetings. Months pass before anyone writes a line of code.
The migration strategist — our final agent — exists to eliminate that gap. It reads all six reports and produces something the previous agents don't: implementation-ready artifacts. PRDs, user stories, and structured prompts designed to be consumed directly by agentic coding assistants like Claude Code or Kiro.
The output of this system isn't a report that humans read and then manually translate into work. It's a specification that AI coding agents can pick up and start building from.
## The synthesis
The strategist reads everything:
- 01-discovery.md — Technology stack, 7 R's classification, code quality signals
- 02-decomposition.md — Bounded contexts, service map, Strangler Fig plan, extraction order
- 03-architecture.md — Compute decisions, data architecture, network design, CI/CD
- 03b-observability.md — SLOs, monitoring, tracing, chaos engineering, DR
- 04-security.md — Vulnerability audit, IAM architecture, encryption, detective controls
- 05-cost.md — Monthly costs, TCO model, licensing savings, testing strategy
From these, it produces three categories of output.
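Mechanically, the synthesis step starts with nothing more exotic than reading six markdown files. A minimal sketch in Python (the directory name is an assumption; the filenames are the ones listed above):

```python
from pathlib import Path

# The six analysis reports the strategist consumes, in reading order.
REPORTS = [
    "01-discovery.md",
    "02-decomposition.md",
    "03-architecture.md",
    "03b-observability.md",
    "04-security.md",
    "05-cost.md",
]

def load_reports(analysis_dir: str = "analysis") -> dict[str, str]:
    """Read every agent report into memory, keyed by filename."""
    root = Path(analysis_dir)
    return {name: (root / name).read_text(encoding="utf-8") for name in REPORTS}
```

Because the agents communicate through documents rather than shared state, the strategist can be re-run at any time against an updated set of reports.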
## Output 1: The PRD
The strategist generates a Product Requirements Document for the overall modernization — not a generic template, but a PRD grounded in the specific findings from the analysis.
The PRD structure follows the bounded contexts from the decomposition. If you're using Kiro (an AI-native IDE that supports spec-driven development), these map directly to spec files:
```
specs/
  00-foundation.md          ← Landing zone, VPC, CI/CD pipeline, security baseline
  01-identity-service.md    ← Auth migration to Cognito, user directory, RBAC
  02-content-service.md     ← Core domain extraction, ORM migration, API design
  03-media-service.md       ← S3 integration, image processing, CDN
  04-navigation-service.md  ← URL routing, site tree, friendly URLs
  05-data-migration.md      ← DMS parallel run, schema conversion, validation
  06-observability.md       ← CloudWatch, X-Ray, alarms, dashboards
  ...
```
Each spec file carries everything a developer — or an AI coding agent — needs to start building:
- Context — what the discovery and decomposition reports found about this bounded context
- Requirements — functional and non-functional, derived from the architecture and security reports
- Acceptance criteria — measurable outcomes derived from the SLO definitions (Part 5) and testing strategy. For example, "Content API responds within p99 < 500ms under load" comes from the observability report's post-stabilization SLOs, and "all CRUD operations pass contract tests" comes from the testing pyramid's contract layer.
- Technical constraints — specific AWS services, CDK constructs, security controls from the architecture
- Dependencies — which other specs must be completed first (from the extraction order)
- Testing requirements — which layers of the testing pyramid (Part 5) apply to this spec, with specific tools and coverage targets
- Data migration notes — schema changes, parallel run requirements, validation criteria
The PRD isn't aspirational. Every requirement traces to a specific finding in the analysis. "The identity service must support transparent user migration on first login" traces to the security report's finding that legacy password hashes can't be imported to Cognito, which traces to the discovery report's finding of bcrypt/scrypt JARs in the security package.
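That traceability can be enforced mechanically: give every requirement a list of evidence references, and flag any requirement that has none. A minimal sketch; the data shapes and finding anchors are illustrative, not the strategist's actual format:

```python
from dataclasses import dataclass

@dataclass
class Requirement:
    text: str
    traces_to: list[str]  # report#finding references backing this requirement

def untraced(requirements: list[Requirement]) -> list[Requirement]:
    """Flag requirements that cannot be tied back to an analysis finding."""
    return [r for r in requirements if not r.traces_to]

reqs = [
    Requirement(
        "Support transparent user migration on first login",
        ["04-security.md#cognito-hash-import", "01-discovery.md#bcrypt-scrypt-jars"],
    ),
    Requirement("Add a GraphQL gateway", []),  # aspirational: no evidence, gets flagged
]
assert [r.text for r in untraced(reqs)] == ["Add a GraphQL gateway"]
```

Anything the check flags is either dropped from the PRD or sent back to the analysis agents for evidence.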
## Output 2: User stories with implementation context
For each bounded context, the strategist generates user stories that carry the analysis context forward:
```markdown
### Story: Replace custom S3 client with AWS SDK

**As a** platform engineer
**I want to** replace the legacy HTTP-based S3 client with the AWS SDK
**So that** we eliminate custom HMAC signing, remove hardcoded access keys,
and use IAM task roles for authentication

**Context from analysis:**
- Discovery: s3.cfc is 1,375 lines of custom cfhttp-based S3 client
- Security: accessKeyId and awsSecretKey are hardcoded in config (P0 finding)
- Architecture: Media Service on Lambda with IAM execution role, no credentials needed
- Cost: Eliminates credential rotation overhead

**Acceptance criteria:**
- [ ] All S3 operations use AWS SDK v3
- [ ] No access keys in source code, config, or environment variables
- [ ] IAM task role provides S3 read/write to media bucket only
- [ ] Existing upload/download/delete operations produce identical results
- [ ] CloudFront distribution serves media assets with OAC
```
These aren't generic user stories. They carry the specific evidence, file paths, and architectural decisions forward from the analysis. Engineers need the why (rationale, tradeoffs, constraints) to make good implementation decisions. AI coding agents need the what (specific targets, acceptance criteria, CDK constructs) to generate correct code. These stories serve both.
## Output 3: Implementation prompts for AI coding agents
This is the bridge between analysis and execution. The strategist generates structured prompts designed to be fed directly into Claude Code, Kiro, or similar agentic coding assistants:
```markdown
### Prompt: Scaffold CDK stack for Identity Service

You are implementing the Identity Service for a legacy modernization.

**Architecture context:**
- Service: Identity Service (bounded context: Identity & Access Management)
- Compute: Cognito User Pool + Lambda adapter
- CDK construct: aws_cognito.UserPool
- Auth flow: Cognito InitiateAuth with SRP
- Migration: UserMigration_Authentication Lambda trigger
- MFA: Required for admin users, optional for end users
- Custom attributes: custom:legacy_role, custom:legacy_groups

**Security requirements (from 04-security.md):**
- Password policy: minimum 12 characters, uppercase, lowercase, numbers, special
- Advanced security features: compromised credential detection enabled
- Lambda triggers: pre-authentication (audit logging), post-authentication (session logging)

**From discovery (01-discovery.md):**
- Legacy auth: 10 CFCs in packages/security/, ~4,500 LOC
- Password hashing: jBCrypt 0.3m + scrypt 1.3.1
- Session-based auth: session.security.userid
- Role hierarchy: farUser → farGroup → farRole → farPermission

**Generate:**
1. CDK stack with Cognito User Pool, Lambda triggers, and IAM roles
2. Migration Lambda that authenticates against legacy DB on first login
3. Lambda authorizer for API Gateway that validates JWT and checks roles
4. Unit tests for the migration Lambda and authorizer
```
Each prompt includes the relevant context from every agent report. The coding assistant doesn't need to re-discover the architecture or re-read the security requirements. It has everything it needs to generate working infrastructure code.
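Assembling such a prompt is mostly mechanical: slice the relevant sections out of each report and stitch them around a task statement. A minimal sketch (the extraction logic assumes the reports use `##` markdown headings internally, which is an assumption about their format):

```python
import re

def extract_section(report: str, heading: str) -> str:
    """Pull the body of one '## heading' section out of a markdown report."""
    pattern = rf"(?ms)^##\s+{re.escape(heading)}\s*\n(.*?)(?=^##\s|\Z)"
    match = re.search(pattern, report)
    return match.group(1).strip() if match else ""

def build_prompt(task: str, context: dict[str, str]) -> str:
    """Compose an implementation prompt from a task plus labeled report excerpts."""
    parts = [f"### Prompt: {task}", ""]
    for label, excerpt in context.items():
        parts += [f"**{label}:**", excerpt, ""]
    return "\n".join(parts)
```

The strategist effectively does this across all six reports, so the coding assistant receives only the slices relevant to the service it is building.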
## The PRD as a dependency graph
The strategist doesn't just produce a flat list of specs. It produces a dependency graph that mirrors the extraction order from the decomposition:
```
Foundation (VPC, CI/CD, security baseline)
├── Identity Service (Cognito, auth migration)
├── Media Service (S3 SDK, image processing)
├── Email Service (SES integration)
└── Cache Service (ElastiCache Redis)
    ├── Content Service (core domain, ORM migration)
    ├── Navigation Service (URL routing, site tree)
    └── Data Migration (DMS parallel run)
        ├── Workflow Service (Step Functions, SQS)
        └── Admin Service (webtop rebuild)
```
This dependency graph tells the coding assistant — or the engineering team — what to build in what order. Foundation first. Then the services with no inbound dependencies. Then the core domain. Then the services that depend on everything else.
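Deriving that order from the graph is a plain topological sort. A sketch using Python's standard-library `graphlib`; the exact edges below are illustrative, taken loosely from the tree above:

```python
from graphlib import TopologicalSorter

# Each spec maps to the specs that must be completed before it starts.
dependencies = {
    "foundation": set(),
    "identity-service": {"foundation"},
    "media-service": {"foundation"},
    "email-service": {"foundation"},
    "cache-service": {"foundation"},
    "content-service": {"identity-service", "media-service", "cache-service"},
    "navigation-service": {"content-service"},
    "data-migration": {"content-service"},
    "workflow-service": {"content-service", "email-service"},
    "admin-service": {"workflow-service", "navigation-service"},
}

build_order = list(TopologicalSorter(dependencies).static_order())
assert build_order[0] == "foundation"
assert build_order.index("content-service") > build_order.index("identity-service")
```

`TopologicalSorter` also exposes `prepare()`/`get_ready()`, which yields everything whose dependencies are already satisfied, so independent services within a wave (identity, media, email, cache) can be built in parallel.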
## The Well-Architected validation
Before producing the PRD, the strategist checks the overall plan against the AWS Well-Architected Framework. It assesses all six pillars using evidence from the prior reports and recommends remediation where gaps exist:
| Pillar | What it checks |
|---|---|
| Operational Excellence | CI/CD maturity, test coverage, monitoring, runbooks |
| Security | Vulnerability findings, credential management, network isolation |
| Reliability | Multi-AZ design, auto-scaling, fault isolation, DR strategy |
| Performance Efficiency | Compute right-sizing, caching, CDN, async patterns |
| Cost Optimization | Licensing analysis, Savings Plans, optimization paths |
| Sustainability | Serverless adoption, Graviton usage, managed services |
Each pillar is scored green (meets best practices), yellow (partial gaps, manageable risk), or red (significant gaps requiring remediation before proceeding). If any pillar scores yellow or red, the strategist flags the specific gaps and recommends remediation tasks for inclusion in the PRD.
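The scoring rule itself is simple enough to state in code. A sketch; the severity-to-color thresholds here are an illustrative assumption, not the agent's actual rubric:

```python
def score_pillar(open_finding_severities: list[str]) -> str:
    """Map a pillar's open findings to a green/yellow/red Well-Architected score."""
    if "high" in open_finding_severities:
        return "red"      # significant gaps: remediate before proceeding
    if open_finding_severities:
        return "yellow"   # partial gaps, manageable risk
    return "green"        # meets best practices

assert score_pillar([]) == "green"
assert score_pillar(["low", "medium"]) == "yellow"
assert score_pillar(["medium", "high"]) == "red"
```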
A human architect reviews these recommendations — the agent identifies the gaps, but the team decides which remediation tasks to prioritize and how to sequence them. This review should happen within the first week of receiving the report, before implementation begins.
## The risk register
The strategist produces an evidence-based risk register that accompanies the PRD. Every risk ties to a specific finding:
| Risk Pattern | Evidence Source | Mitigation |
|---|---|---|
| ORM data type mapping errors | Discovery (ORM analysis) | Schema conversion assessment + parallel run validation |
| SQL injection surface area | Discovery (query count) | WAF rules (immediate) + prioritized audit |
| Session loss during containerization | Discovery (state management) | Externalize sessions to ElastiCache Redis or DynamoDB before first container deployment |
| Credential exposure during transition | Security (credential audit) | Secrets Manager migration as P0 |
| Identity migration friction | Security (password hash analysis) | Migration Lambda trigger for transparent cutover + parallel legacy auth during transition as rollback path |
Each risk has a mitigation strategy and a rollback plan. These become acceptance criteria in the relevant PRD specs — the risk isn't just documented, it's addressed in the implementation plan.
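That fold from risk to acceptance criterion can be sketched directly; the field names and the example rollback text below are illustrative. The point is that each mitigation and rollback becomes a checkable line item on the spec that owns the risk:

```python
from dataclasses import dataclass, field

@dataclass
class Risk:
    pattern: str
    evidence: str     # report + finding this risk ties back to
    mitigation: str
    rollback: str

@dataclass
class Spec:
    name: str
    acceptance_criteria: list[str] = field(default_factory=list)

def fold_risk_into_spec(risk: Risk, spec: Spec) -> None:
    """Turn a documented risk into checkable acceptance criteria on its spec."""
    spec.acceptance_criteria.append(
        f"Mitigation in place: {risk.mitigation} (evidence: {risk.evidence})"
    )
    spec.acceptance_criteria.append(f"Rollback path verified: {risk.rollback}")

spec = Spec("02-content-service.md")
fold_risk_into_spec(
    Risk(
        pattern="Session loss during containerization",
        evidence="01-discovery.md (state management)",
        mitigation="Externalize sessions to ElastiCache Redis before first container deployment",
        rollback="Keep legacy-tier sessions live until cutover is validated",  # illustrative
    ),
    spec,
)
assert len(spec.acceptance_criteria) == 2
```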
A note on living documents: implementation will inevitably reveal design gaps that the analysis couldn't predict — edge cases in data migration, performance characteristics that differ from estimates, integration behaviors that only surface under real traffic. The PRDs are starting points, not contracts. Teams should update specs as implementation proceeds, feeding discoveries back into the analysis chain.
## The handoff to agentic coding
This is where the analysis pipeline ends and the implementation pipeline begins.
The output of our system — PRDs, user stories, implementation prompts — is designed to be the input for the next system: agentic coding assistants that actually write the infrastructure code, service implementations, tests, and migration scripts.
The analysis agents read legacy code and produce specifications. The coding agents read specifications and produce modern code. The specifications are the bridge between the old world and the new one.
We're building that second system — an AI-driven development lifecycle (AI-DLC) that takes these PRDs and executes the transformation. Agentic coding assistants handle the entire lifecycle: CDK scaffolding, service implementation, test generation, data migration scripts, deployment automation, and validation.
That's the next series.
## What we've built
Looking back at the full system:
- **Six specialized agents** — scanner, decomposer, architect, security architect, observability architect, cost analyzer — each an expert in one phase of modernization analysis.
- **Document-based communication** — agents read and write markdown files. Human-readable, independently re-runnable, fully traceable.
- **Conditional MCP tool usage** — query AWS documentation and pricing when needed, skip when training knowledge suffices.
- **Evidence-based output** — every recommendation traces to specific code findings. No generic advice, no hand-waving.
- **Implementation-ready artifacts** — the final output isn't a report to be read. It's PRDs, user stories, and prompts to be executed by the next generation of AI coding agents.
The analysis takes the legacy codebase from "we don't know what we have" to "here's exactly what we have, here's the target architecture, here's the security posture, here's the cost, and here are the implementation specs ready for AI-assisted development."
That's the transformation: from tribal knowledge to executable specifications.
One command. Six agents. A complete modernization blueprint — ready for the machines that will build it.
## Your legacy, your journey
We've walked through three applications in this series — a ColdFusion CMS, a Java e-commerce platform, and a .NET enterprise portal. Each one told a different story. The ColdFusion app needed deep ORM migration work. The Java platform needed to go backward (consolidate) before going forward (extract). The .NET portal's challenge was platform dependencies, not architecture.
Your application will tell its own story. The agents, the wave structure, the output format — all of it is designed to adapt to what the scanner finds in your codebase. ColdFusion, Java, .NET, COBOL, Node, Python — the approach is the same, but the outputs are entirely shaped by your code.
The skill definition, the agent prompts, and the orchestration are all open and configurable. You can adjust the agent expertise for your industry (healthcare compliance, financial services regulations, government security requirements). You can add custom patterns to the scanner for your framework. You can tune the cost model for your region and your pricing agreements. And while this series targets AWS, the framework adapts to any cloud provider — Azure, Google Cloud, Oracle Cloud. The discovery, decomposition, and migration strategy agents are cloud-agnostic; only the solution architect and cost analyzer carry provider-specific knowledge, and those can be swapped for your target CSP. The tools are designed to be tailored — not one-size-fits-all, but a starting framework you make your own.
If you're sitting on a legacy codebase wondering where to start, point the scanner at it. The worst case is you get a 500-line discovery report that tells you things about your own code you didn't know. The best case is a complete modernization blueprint, ready for implementation.
We'd love to help you get started. Whether it's running the analysis against your codebase, customizing the agents for your stack and compliance requirements, or building the implementation pipeline for your specific migration — reach out. Every legacy modernization starts with the same step: understanding what you actually have.
This concludes the analysis series. The next series — AI-Driven Development Lifecycle — will cover taking these PRDs and implementation prompts and using agentic coding assistants to execute the actual cloud transformation.