AI Ways of Working Assessment
No report-and-walk-away. The assessment ends with a pilot-ready plan, with implementation beginning in week 4.
What this involves
Three weeks, then the pilot.
Operating model mapping
Key people across product, design, and engineering are interviewed — not to audit, but to understand the actual flow of work. Where do handoffs happen? Where does work queue? Where is context lost between roles? The real delivery chain is mapped, including the informal workarounds built around the formal process.
- Role-by-role interviews across product, design, and engineering
- Map the actual delivery chain — not the org chart
- Identify where AI is already in use and where it isn't
Bottleneck analysis & readiness check
Interviews are synthesised into a clear picture of where the operating model is the real constraint. Most teams have faster developers but haven't changed anything upstream — the bottleneck is the handoff between research and design, or the feedback loop between shipping and learning. Delivery readiness is also assessed: CI/CD, quality standards, and the testing automation that makes agent output safe to ship.
- Identify the highest-leverage intervention points
- Assess AI adoption across the full lifecycle — not just engineering
- Readiness check: what needs to be true before you embed agents
Pilot design — ready to start
The map is used to design the pilot: which team, which workflow, which agents. The new operating model is defined for that team — who does what, what agents handle, and how the pieces connect. By the end of week 3 you have a pilot roadmap and baseline metrics, and implementation is ready to begin. No additional planning phase.
- Select pilot team and redesign their workflow end-to-end
- Define agent integration: which agents, which stages, which quality gates
- Set baseline metrics so the difference is measurable from day one
- Pilot roadmap · Agent workflow design · Baseline metrics · Operating model playbook
Pilot — implementation begins
Implementation starts immediately in week 4. No gap, no additional scoping. Agents go live in the redesigned workflow. Delivery velocity, flow metrics, and adoption are tracked against the week 3 baselines, and the workflow is adjusted as insights emerge.
Who this is for
Leaders who suspect something's off.
Leaders who see their team is faster at individual tasks but not proportionally faster at shipping outcomes. Teams that have adopted AI tools but haven't seen the time-to-market improvement they expected. Leaders preparing a case for broader transformation who need data, not intuition.
What you have at the end of week 3
- An operating model map showing how work actually flows across roles
- An AI adoption profile: where agents are used, where they're not, and where the gaps are
- A prioritised bottleneck analysis, ordered by leverage
- A readiness assessment for agentic delivery
- A pilot roadmap: which team, which workflow, which agents — ready to implement
What happens next
Week 3 ends. Week 4 starts.
There's no gap between the assessment and the pilot. At the end of week 3 you have the pilot plan, the agent architecture, and the baseline metrics. Week 4 we implement. Most teams then move into a broader Build & Ship engagement once the pilot proves the model. Many run Coach & Develop in parallel to upskill the team as the new ways of working take shape.
See where the leverage is.
Tell us about your organisation and your goals; expect a response within a day to discuss how an assessment can help.
Get in Touch →