An AI-enhanced FDLC at Nintex — a single, shared view of where we are, what we're changing, and how each role moves with it.
Before any conversation about workflow, tooling, or maturity arcs — these are the things we are not negotiating. Everything that follows in this document serves them.
If a new practice does not move one of these, it should not survive the shift.
The case for change isn't "AI is exciting." It's structural — three pressures that have built up in how we build software, all of which compound.
At roughly a 1:16 ratio of PMs and designers to engineers, handoff quality is the primary throughput lever. Designers cannot absorb rework at scale. Engineering cannot afford mid-build clarification loops across our volume of features. The fix has to be cross-role, not single-discipline.
Drafting, synthesis, exploration, test generation, signal monitoring: the cost per attempt has collapsed. Some practices exist because of constraints AI has removed. Others get supercharged. A few are now genuinely possible for the first time. The leadership question is not how much AI to use, but which category of work to apply it to.
In an agentic build environment, code can be regenerated from a well-formed spec. Spec quality becomes the primary lever for the quality of everything that follows — design generation, build, test, review. A spec is now the entry condition for the workflow, not its byproduct.
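To make "well-formed" concrete without prescribing a schema, here is one possible shape for a spec that can act as that entry condition. This is a sketch only; every field name below is an assumption for illustration, not the framework's actual format.

```typescript
// Illustrative only: one possible shape for a spec that is sufficient to
// serve as the AI brief. All field names are assumptions.
interface FeatureSpec {
  problem: string;              // PM-owned problem statement
  acceptanceCriteria: string[]; // testable ACs that generation and QA run against
  designConstraints: string[];  // grounded in the design constitution
  feasibilityNotes: string[];   // engineering constraints and known risks
}
```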
The operating detail underneath is real. But at the level that matters for most conversations, the cycle is three phases that each end at a clear checkpoint. Define sets the brief. Build turns it into a working feature. Ship puts it in front of customers.
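Pictured as data, and purely as a sketch (the phase names are from this document; the gate fields and role assignments are assumptions), the cycle is three ordered phases, each closed by exactly one checkpoint:

```typescript
// Sketch of the three-phase cycle. Phase names come from this document;
// the Gate shape and the role assignments are illustrative assumptions.
type Phase = "Define" | "Build" | "Ship";

interface Gate {
  phase: Phase;            // the phase this checkpoint closes
  accountableRole: string; // assumed mapping, per the Trio accountability model
  exitCriteria: string[];  // what must be true before work moves on
}

const cycle: Gate[] = [
  { phase: "Define", accountableRole: "PM",          exitCriteria: ["spec is sufficient to act as the brief"] },
  { phase: "Build",  accountableRole: "Engineering", exitCriteria: ["generated output reviewed and certified"] },
  { phase: "Ship",   accountableRole: "PM",          exitCriteria: ["feature is in front of customers"] },
];
```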
It's not "more AI." It's a small number of structural shifts that change how the work flows. Here is what gets retired, and what replaces it.
Role accountability is fixed. The execution model — whether a role works alone, with AI assistance, or certifies AI-generated output — varies by team maturity. The framework defines who is accountable. It does not prescribe human presence at every step.
PM, Design, and Engineering co-author the spec and carry accountability at the gates in their respective phases. Architecture engages by condition — new service dependencies, public API changes, constitution-level decisions. The DS Team, QA, Tech Writing, Product Marketing, and Customer Success are part of the cycle and have defined seats in it.
The Trio model isn't procedural — it's structural. PM intent, design constraints, and engineering feasibility are not cleanly separable at the point where generation runs.
PM is the accountable owner of the problem and the spec. The spec is the durable artifact — the brief everything downstream runs against — and PM owns its sufficiency.
Design shifts from producing the visible artifact to stewarding the system that produces it — variants, validation, and pattern compounding under a grounded design constitution.
Engineering's role shifts toward directing, reviewing, and certifying AI-generated output — with structural discipline on context loading, generation per unit of work, and mandatory human checkpoints.
Not a standing Trio member. Engages on defined triggers — and the trigger set is explicit so it's neither over- nor under-invoked.
QA's leverage moves upstream. Test scaffolding from the spec means QA spends less time on mechanical authoring and more on the assertions and edge cases that AI can't anticipate.
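As a hedged illustration of what scaffolding from the spec could look like (the function and the spec shape here are hypothetical, not the framework's tooling), the mechanical part is stub generation; the assertions stay with QA:

```typescript
// Hypothetical sketch: derive test stubs from the spec's acceptance
// criteria, leaving the assertions and edge cases to QA judgment.
function scaffoldTests(spec: { acceptanceCriteria: string[] }): string {
  return spec.acceptanceCriteria
    .map((ac, i) => `test("AC${i + 1}: ${ac}", () => { /* QA: assertions and edge cases here */ });`)
    .join("\n");
}
```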
The DS Team becomes load-bearing in a different way: design constitution authoring, companion file health, Textura MCP capability gaps, and pattern promotion approval.
The single biggest workflow change for Tech Writing (TC): documentation work happens alongside engineering during Build, not in a release scramble afterward. Spec-grounded drafts. Code-grounded updates.
Like TC, Product Marketing (PMkt) moves into Build rather than queueing after it. Positioning, messaging, and launch materials are staged alongside engineering, drafted from the spec, refined as the feature firms up.
CS signal becomes a structured, always-on input to discovery — and a closed-loop destination after launch. Less "reporting on what happened." More "structured signal that routes into the queue."
Teams advance based on signal from completed features, not on a timeline. No team is asked to operate at a maturity level its tooling, infrastructure, or experience hasn't earned. The pilot starts at Pre-Spec-First and demonstrates the workflow reaching Spec-First. Everything beyond is unlocked by pilot evidence.
Spec quality isn't yet reliable enough for AI to work from at scale. Every gate is human-led. AI assistance is used for drafting, synthesis, and exploration — but the workflow itself runs the way it does today.
Sufficient spec quality exists. The spec becomes the AI brief. Generation runs against constitution and companion files. Human review at G3 and G4 is fully active. This is where the pilot demonstrates the workflow.
Spec is versioned alongside the feature branch. Drift triggers a formal amendment process. Human review threshold for generation output is lighter. Trust in the generation loop has been earned through evidence.
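The drift trigger can be pictured as a simple version comparison. A minimal sketch, assuming the spec carries a version stamp and the branch records the version it was generated from; every name here is hypothetical:

```typescript
// Hypothetical sketch of the drift trigger: spec version vs. the version
// the feature branch was generated from. A mismatch opens a formal
// amendment rather than diverging silently. All names are illustrative.
interface SpecRef {
  id: string;
  version: string; // version stamped on the spec
}

interface FeatureBranch {
  name: string;
  builtFromSpec: SpecRef; // the spec version generation ran against
}

function checkDrift(currentSpec: SpecRef, branch: FeatureBranch): string | null {
  if (branch.builtFromSpec.version === currentSpec.version) return null;
  return `Spec ${currentSpec.id} moved ${branch.builtFromSpec.version} -> ${currentSpec.version}: open a formal amendment`;
}
```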
The spec is the primary build input. AI generation operates with high reliability. Human review concentrates on judgment calls — not error catching. The role flexibility headroom is largest here.
At every maturity level — including the highest — a defined set of activities stays human-led. These aren't there because the AI can't do them. They're there because someone needs to be accountable.
This isn't a switch — it's a sequenced rollout, with each stage gated by evidence from the one before. Where teams are today informs the on-ramp; the rollout shape is calibrated from what the pilot actually shows.
Prep meetings with AI. Draft before you write. Generate ACs from a brief. Synthesize customer signal continuously. Use the three modes — agent-led, agent-assisted, human-led — deliberately. These habits don't require the pilot. They are the foundation everything else stands on.
Selected Trios operate the full FDLC structure across 2–3 features. Domain companion files authored. Constitution grounded. Track A (Copilot) and Track B (Claude Code) running side-by-side. In-flight measurement embedded in gate checklists, not bolted on retrospectively.
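One way to picture measurement embedded in the checklist rather than bolted on: each gate answer carries its evidence inline. Field names below are assumptions for illustration:

```typescript
// Illustrative sketch: the gate checklist records its own measurement
// in-flight, per feature and per pilot track. Field names are assumptions.
interface ChecklistItem {
  question: string;   // the gate question itself
  passed: boolean;
  measuredAt: string; // ISO timestamp captured when the gate is worked, not after
  signal?: number;    // optional metric recorded alongside the answer
}

interface GateRecord {
  feature: string;
  track: "A (Copilot)" | "B (Claude Code)"; // the two pilot tracks named above
  items: ChecklistItem[];
}
```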
The first AI FDLC Review produces two outputs: the advancement decision and the rollout calibration package. Targets, sequencing, and onboarding parameters are set from pilot evidence — not pre-committed assertions. Teams onboard against criteria, not seniority.
Constitutions evolve from production evidence. Patterns compound across domains. Agent landscape expands as infrastructure thresholds are met. Teams advance through maturity tiers as their evidence permits. The framework gets better with use — when we use it well.
This shift isn't about AI being exciting. It's about removing the work that gets in the way of the work that matters — so more of our collective attention goes to the decisions, judgments, and relationships that actually shape the product. The framework above is how we get there together, with each role's contribution defined and respected, and with evidence — not enthusiasm — gating every step forward.
Build the AI habits in your team now. Help us identify pilot candidates. Bring questions and pushback to the AI FDLC Review.
We are not pre-committing to adoption targets or rollout pace. Those come from the pilot calibration package, not from this document.
A workflow that adapts to your domain, role accountability you can hold, and a system that compounds with the evidence the work generates.