NINTEX R&D ORIENTATION · MAY 2026

The way we build is changing.
What we build for is not.

An AI-enhanced FDLC at Nintex — a single, shared view of where we are, what we're changing, and how each role moves with it.

01 / The constant
Better software, faster, at higher quality, in service of customer value.
02 / The shift
Spec quality, not activity completion, becomes the primary lever for end-to-end velocity.
03 / The method
AI assistance applied across the cycle — with human accountability at the gates that matter.
What stays the same

Goals first. Method second.

Before any conversation about workflow, tooling, or maturity arcs — these are the things we are not negotiating. Everything that follows in this document serves them.

If a new practice does not move one of these, it should not survive the shift.

The north star

  • 01 / Build software that genuinely solves customer problems.
  • 02 / Get from intent to ship faster.
  • 03 / Raise the quality of what reaches production.
  • 04 / Reduce the work that gets in the way of the work that matters.
  • 05 / Compound what we learn so the system gets better with use.

Why this, why now

Three constraints we are hitting at the same time.

The case for change isn't "AI is exciting." It's structural — three pressures that have built up in how we build software, all of which compound.

01

The handoff is the bottleneck — not any single role.

At roughly 1:16 PM-to-engineer and designer-to-engineer ratios, handoff quality is the primary throughput lever. Designers cannot absorb rework at scale. Engineering cannot afford mid-build clarification loops across our volume of features. The fix has to be cross-role, not single-discipline.

02

AI has changed the economics of every activity in the cycle.

Drafting, synthesis, exploration, test generation, signal monitoring — the cost-per-attempt has collapsed. Some practices exist because of constraints AI has removed. Others get supercharged. A few are now genuinely possible for the first time. The leadership question is not how much AI but which category of work it's applied to.

03

Specs are durable. Code is increasingly ephemeral.

In an agentic build environment, code can be regenerated from a well-formed spec. Spec quality becomes the primary lever for the quality of everything that follows — design generation, build, test, review. A spec is now the entry condition for the workflow, not its byproduct.

The opportunity is straightforward: move the time-consuming, low-judgment tasks to AI assistance, and concentrate human attention on the decisions, relationships, and judgments that actually differentiate the product.

Working principle — AI-Enhanced FDLC
The framework, simply

Three phases. One thread.

The operating detail underneath is real. But at the level that matters for most conversations, the cycle is three phases that each end at a clear checkpoint. Define sets the brief. Build turns it into a working feature. Ship puts it in front of customers.

Phase 1 of 3 · Replaces: Exploration + Definition · ~1 week for a standard feature

Define

Everything before a line of code is written or an architecture decision is made. The output is a constitution and a clarified spec — the two documents that drive everything else. The goal is to compress what used to be two phases and multiple meetings into one focused phase with a single Trio sign-off.

Inside this phase

  • Signal: Customer, user, and behavioral signal synthesized into a problem worth solving.
  • Frame: Problem statement, feature tier, capacity, and dependencies confirmed.
  • Spec: Trio co-authors a single spec — PM owns framing & acceptance criteria (ACs), Design owns intent & states, Eng owns approach & contracts (see the sketch after this list).
  • Sign-off: Spec meets the sufficiency bar for the feature type. Trio co-signs.
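
To make the co-authorship concrete, here is a minimal sketch of the spec as a single typed structure. The field names and shapes are assumptions chosen for illustration, not a prescribed schema; the actual sufficiency bar is set per feature type at sign-off.

```typescript
// Illustrative sketch only: field names and shapes are assumptions, not a schema.
interface TrioSpec {
  // PM-owned: framing and acceptance criteria.
  problemStatement: string;
  featureTier: string;              // tier names are defined per feature type
  acceptanceCriteria: string[];

  // Design-owned: intent and states.
  designIntent: string;
  states: string[];                 // e.g. empty, loading, error, populated

  // Engineering-owned: approach and contracts.
  technicalApproach: string;
  serviceContracts: string[];       // cross-team contracts this feature touches

  // One spec, three signatures: sufficiency is co-signed, not assumed.
  signOff: { pm: boolean; design: boolean; eng: boolean };
}
```

The point of the shape is the co-ownership: no section is optional, and no single role can sign the whole thing off alone.
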
Phase 2 of 3 · Replaces: Design + Implement · Sprint cycles — typically 2–4 sprints

Build

Spec-driven tooling — SpecKit, BMAD, or direct AI-assisted implementation — generates the plan, the tasks, and the code. The team reviews, validates, and works in parallel. The critical shift: UX design, TC content, and PMkt materials happen alongside engineering, not after it. When code is ready, everything else is ready too.

Inside this phase

  • Design: AI-assisted variant generation validated against the design constitution and design system.
  • Eng: Context-grounded generation per unit of work. Mandatory human checkpoints. Two-step review.
  • QA: UX review gate runs first. Then automated CI/CD. Then human verification of ACs against the spec (see the sketch after this list).
  • Parallel: Tech writing, marketing collateral, and enablement materials assembled in parallel — not queued behind release.
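
The ordering in the QA item above is a pipeline, not a preference: nothing advances until the stage before it passes. A minimal sketch of that sequencing, with hypothetical gate and type names:

```typescript
// Illustrative sequencing only: gate names and the result shape are assumptions.
type GateResult = { passed: boolean; notes?: string };
type Gate = (buildId: string) => Promise<GateResult>;

// Run gates strictly in order; stop at the first failure.
async function runBuildGates(buildId: string, gates: Gate[]): Promise<boolean> {
  for (const gate of gates) {
    const result = await gate(buildId);
    if (!result.passed) return false; // a failed gate blocks everything behind it
  }
  return true;
}

// Hypothetical usage: UX hard gate first, then automated CI/CD checks
// (accessibility, DS compliance, visual regression), then human AC verification.
// await runBuildGates("feature-123", [uxReviewGate, ciPipelineGate, humanAcGate]);
```
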
Phase 3 of 3 · Replaces: Release & Learn scramble · Days, not weeks — everything is already staged

Ship

Because TC and PMkt worked in parallel during Build, there is no scramble. Code is ready, content is approved, GTM is staged. Ship is the execution of a plan that's already prepared — not the start of a new preparation phase. Release (code in prod) and Launch (GTM fires) are two separate events owned by different people.

Inside this phase

  • Release: Code deployed, flag off, stability window monitored. Owned by Engineering.
  • Launch: Flag flipped, GTM fires, first-hour monitoring active. Owned by PM.
  • Learn: Post-ship signal — JTBD completion, friction patterns, ROI — routed back into discovery, not filed away.
  • Close-out: Constitution amendment candidates surfaced. The system improves with use.

What actually changes

The shift, made concrete.

It's not "more AI." It's a small number of structural shifts that change how the work flows. Here is what gets retired, and what replaces it.

Today

Sequential, role-led, handoff-heavy

  • Specs written in isolation: PM writes requirements, Design picks them up, Engineering picks them up again. Gaps surface during build.
  • Design then implement: Design completes, hands to engineering. Mid-build clarification loops are common. Rework is expensive.
  • Tech writing & marketing queue after code: Release becomes a scramble. Quality of supporting materials is uneven under time pressure.
  • AI used opportunistically: Individuals use AI tools where they help, with no shared discipline, prompt library, or quality contract.
  • Post-ship learning is informal: Signal returns to discovery via memory and attention rather than instrumented feedback loops.

Tomorrow

Trio-authored, parallel, evidence-driven

  • One spec, co-authored by the Trio: PM, Design, and Engineering co-own a single spec at G2. Gaps surface before generation, not during.
  • Design and engineering work against the same brief: Design generates against the constitution. Engineering builds against the G3-approved spec. Two threads, one anchor.
  • Supporting work happens in parallel: TC, PMkt, and enablement assemble alongside Build. Ship becomes execution of a prepared plan.
  • AI use is structured: Three modes — agent-led, agent-assisted, human-led — applied deliberately by phase (illustrated after this list). Spec quality is the primary AI quality lever.
  • Post-ship signal closes the loop: JTBD completion rate, friction synthesis, and ROI feed structured signals back into Phase 0 discovery.
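
The three modes are a deliberate per-activity setting, not an individual habit. A sketch of what that assignment might look like; the mapping below is hypothetical and chosen for illustration, except where the framework fixes it (code review and gate judgments stay human-led):

```typescript
// Hypothetical assignments for illustration; real mappings are set per phase and team.
type Mode = "agent-led" | "agent-assisted" | "human-led";

const modeByActivity: Record<string, Mode> = {
  signalSynthesis: "agent-led",     // drafting and synthesis: cost-per-attempt has collapsed
  specAuthoring: "agent-assisted",  // the Trio directs; AI drafts against the constitution
  codeGeneration: "agent-assisted", // generation per unit of work, with human checkpoints
  codeReview: "human-led",          // AI-generated code never merges unreviewed
  goNoGoJudgment: "human-led",      // never delegated, at any maturity level
};
```
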
Who does what

Trio at the center. Extended cast in support.

Role accountability is fixed. The execution model — whether a role works alone, with AI assistance, or certifies AI-generated output — varies by team maturity. The framework defines who is accountable. It does not prescribe human presence at every step.

The Trio is the operating unit.

PM, Design, and Engineering co-author the spec and carry accountability at the gates in their respective phases. Architecture engages by condition — new service dependencies, public API changes, constitution-level decisions. The DS Team, QA, Tech Writing, Product Marketing, and Customer Success are part of the cycle and have defined seats in it.

The Trio model isn't procedural — it's structural. PM intent, design constraints, and engineering feasibility are not cleanly separable at the point where generation runs.

[Diagram: PM, Design, and Engineering around a single Spec. Co-authored, co-signed.]

Trio · G0 · G1 · G2 · Launch

Product Management

PM is the accountable owner of the problem and the spec. The spec is the durable artifact — the brief everything downstream runs against — and PM owns its sufficiency.

What's different

  • Author specs against a defined sufficiency bar — clear enough that two engineers would scope it the same way.
  • Synthesize customer and behavioral signal continuously, with AI assistance, rather than batch-mode discovery.
  • Co-author the combined spec at G2 alongside Design and Engineering — no more PRD-then-handoff.
  • Own Launch (flag flip, GTM fire, outcome ownership) as distinct from Release.
  • Bring post-ship outcomes back to the AI FDLC Review — JTBD completion, ROI, adoption.
Trio · G2 · G3 · G4

Design / UX

Design shifts from producing the visible artifact to stewarding the system that produces it — variants, validation, and pattern compounding under a grounded design constitution.

What's different

  • Run AI-assisted variant generation against a versioned design constitution and design system.
  • Apply the UX hard gate before QA testing — design approves the built UI against the G3 handoff package.
  • Own the G3 handoff package quality — the asset Engineering builds against.
  • Author and maintain the domain design companion file — domain-specific context for the agents.
  • Surface pattern candidates and constitution amendment candidates at close-out.
Trio · G2 · G3 · G4 · Release

Engineering

Engineering's role shifts toward directing, reviewing, and certifying AI-generated output — with structural discipline on context loading, generation per unit of work, and mandatory human checkpoints.

What's different

  • Co-author the spec's technical sections — data shape, state list, service hooks, cross-team contracts.
  • Run AI-assisted implementation per unit of work against the engineering constitution and companion file.
  • Four mandatory human checkpoints — visual change, constitutional compliance, contract adherence, judgment calls (see the sketch after this list).
  • Two-step review — run it, then read it. AI-generated code never merges unreviewed.
  • Author and maintain the domain engineering companion file alongside the design layer.
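
A minimal sketch of the four checkpoints as an explicit checklist; the type name and identifiers are illustrative, but the checkpoint set itself is the one listed above:

```typescript
// The four mandatory human checkpoints; identifiers are illustrative.
type HumanCheckpoint =
  | "visual-change"              // any visible UI change is reviewed by a person
  | "constitutional-compliance"  // output checked against the engineering constitution
  | "contract-adherence"         // cross-team contracts honored exactly as specified
  | "judgment-call";             // decisions that stay human at every maturity level
```
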
Engages by condition

Architecture

Not a standing Trio member. Engages on defined triggers — and the trigger set is explicit so it's neither over- nor under-invoked.

When Architecture is in the room

  • New service introduced or new external dependency declared.
  • Public API change or contract modification.
  • AI-generated implementation pattern surfaces a constitution-level question.
  • Cross-team contract amendment or constitution-level decision required.
  • Pre-review at G3 flags a novel pattern not yet covered by existing decisions.
In the cycle at G4 — every feature

QA

QA's leverage moves upstream. Test scaffolding from the spec means QA spends less time on mechanical authoring and more on the assertions and edge cases that AI can't anticipate.

What's different

  • Test cases mapped to acceptance criteria directly from the spec at G2 — no late-stage discovery of testing gaps.
  • UX review gate runs first; technical QA after design approves the built UI.
  • CI/CD gates (accessibility, DS compliance, visual regression) clear before human QA touches the build.
  • Full regression at G4 and again at Release — distinct events, distinct purposes.
  • QA signal feeds the AI FDLC Review — escape rate by feature track and tier.
Standing role — Loop 1 & Loop 2 cadence

DS Team

The DS Team becomes load-bearing in a different way: design constitution authoring, companion file health, Textura MCP capability gaps, and pattern promotion approval.

What's different

  • Author and maintain the design constitution — the rule set agents use at generation time.
  • Author the platform constitution where layout-sensitive or multi-domain composition is in scope.
  • Monitor companion file staleness — context quality is the generation quality ceiling.
  • Approve pattern promotions from feature-level candidates into the design system.
  • Sit on the AI FDLC Review for constitution amendment decisions.
In Build, not after it

Technical Writing

The single biggest workflow change for TC: documentation work happens alongside engineering during Build, not in a release-scramble afterward. Spec-grounded drafts. Code-grounded updates.

What's different

  • Docs drafted from the spec at G2 — AI-assisted first drafts grounded in actual acceptance criteria.
  • Iteration tracks Build, not lags it — when code is ready, docs are ready too.
  • Docs continuity agent flags drift between shipped behavior and published documentation.
  • Voice and style governance applied through the same constitution model the rest of the system uses.
  • Practitioner-facing FDLC documentation is co-produced by the TC + AI working group.
In Build, not after it

Product Marketing

Like TC, PMkt moves into Build rather than queueing after it. Positioning, messaging, and launch materials are staged alongside engineering, drafted from the spec, refined as the feature firms up.

What's different

  • Launch readiness assembled in parallel with the build — Ship is execution, not preparation.
  • GTM brief drafted from the spec early — refined as design and engineering output stabilizes.
  • Customer-facing messaging grounded in the same problem statement and AC set the Trio works against.
  • Post-launch outcome data shared back into the AI FDLC Review — what landed, what didn't.
  • Seat on the AI FDLC Review group for rollout signal evaluation.
Continuous signal source — Phase 0 & Phase 8

Customer Success

CS signal becomes a structured, always-on input to discovery — and a closed-loop destination after launch. Less "reporting on what happened." More "structured signal that routes into the queue."

What's different

  • Call recordings, escalation patterns, and CSM signals synthesized continuously by VoC and behavioral agents.
  • CS signal briefs are an input PM is accountable for reading and dispositioning at G0.
  • JTBD completion rate and friction synthesis post-ship route findings back to CS for context.
  • Feature ROI calculations include support cost delta — CS data is part of the business case retroactively.
  • The signal loop is structural, not memory-dependent.
How teams grow into it

Maturity is evidence-gated, not calendar-gated.

Teams advance based on signal from completed features, not on a timeline. No team is asked to operate at a maturity level its tooling, infrastructure, or experience hasn't earned. The pilot starts at Pre-Spec-First and demonstrates the workflow reaching Spec-First. Everything beyond is unlocked by pilot evidence.

Where most teams are

Pre-Spec-First

Spec quality isn't yet reliable enough for AI to work from at scale. Every gate is human-led. AI assistance is used for drafting, synthesis, and exploration — but the workflow itself runs the way it does today.

Pilot target

Spec-First

Sufficient spec quality exists. The spec becomes the AI brief. Generation runs against constitution and companion files. Human review at G3 and G4 is fully active. This is where the pilot demonstrates the workflow.

Post-pilot unlock

Spec-Anchored

Spec is versioned alongside the feature branch. Drift triggers a formal amendment process. Human review threshold for generation output is lighter. Trust in the generation loop has been earned through evidence.
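
"Drift triggers a formal amendment process" implies something checkable: the spec version a branch was generated from versus the spec version at review time. A minimal sketch under that assumption; the versioning shape and messages are illustrative, not a defined mechanism:

```typescript
// Illustrative only: the versioning shape and the amendment trigger are assumptions.
interface VersionedSpec {
  specId: string;
  version: number; // incremented on every approved amendment
}

// If the spec moved while the branch was in flight, surface a formal amendment
// question instead of silently building against a stale brief.
function checkSpecDrift(branchSpec: VersionedSpec, currentSpec: VersionedSpec): string | null {
  if (branchSpec.specId !== currentSpec.specId) {
    return "Branch references a different spec entirely; stop and re-anchor.";
  }
  if (branchSpec.version < currentSpec.version) {
    return `Spec drifted from v${branchSpec.version} to v${currentSpec.version}; amendment review required.`;
  }
  return null; // aligned: the branch still matches the spec it was generated from
}
```
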

Long horizon

Spec-as-Source

The spec is the primary build input. AI generation operates with high reliability. Human review concentrates on judgment calls — not error catching. The role flexibility headroom is largest here.

Constitution before autonomy. Agents operating without a defined set of rules produce plausible outputs that subtly violate constraints — so the design and engineering constitutions are implemented first, not later.

Operating principle

What stays human, always

Some judgments are never delegated.

At every maturity level — including the highest — a defined set of activities stays human-led. These aren't there because the AI can't do them. They're there because someone needs to be accountable.

Always human-led

  • Go / No-Go judgment
  • Architecture decisions
  • Constitution rules
  • All code review
  • UX review of built screens
  • QA sign-off
  • All discipline gate approvals
  • Release authorization
  • Retrospective judgment
Where this goes from here

The path from today to scale.

This isn't a switch — it's a sequenced rollout, with each stage gated by evidence from the one before. Where teams are today informs the on-ramp; the rollout shape is calibrated from what the pilot actually shows.

Now — every team

Build the habits.

Prep meetings with AI. Draft before you write. Generate ACs from a brief. Synthesize customer signal continuously. Use the three modes — agent-led, agent-assisted, human-led — deliberately. These habits don't require the pilot. They are the foundation everything else stands on.

Available with current tools
Soon — pilot teams

Run the workflow.

Selected Trios operate the full FDLC structure across 2–3 features. Domain companion files authored. Constitution grounded. Track A (Copilot) and Track B (Claude Code) running side-by-side. In-flight measurement embedded in gate checklists, not bolted on retrospectively.

Pilot · Setup Complete · pending
Next — rollout

Calibrate, then scale.

The first AI FDLC Review produces two outputs: the advancement decision and the rollout calibration package. Targets, sequencing, and onboarding parameters are set from pilot evidence — not pre-committed assertions. Teams onboard against criteria, not seniority.

Post-pilot · evidence-gated
Then — compounding

The system improves.

Constitutions evolve from production evidence. Patterns compound across domains. Agent landscape expands as infrastructure thresholds are met. Teams advance through maturity tiers as their evidence permits. The framework gets better with use — when we use it well.

Standing AI FDLC Review cadence
The frame to hold

The job stays the same. The way the job gets done is upgrading itself.

This shift isn't about AI being exciting. It's about removing the work that gets in the way of the work that matters — so more of our collective attention goes to the decisions, judgments, and relationships that actually shape the product. The framework above is how we get there together, with each role's contribution defined and respected, and with evidence — not enthusiasm — gating every step forward.

What we're asking

Build the AI habits in your team now. Help us identify pilot candidates. Bring questions and pushback to the AI FDLC Review.

What we're not asking

To pre-commit to adoption targets or rollout pace. Those come from the pilot calibration package, not from this document.

What you can expect

A workflow that adapts to your domain, role accountability you can hold, and a system that compounds with the evidence the work generates.