Assessment
Example Report
A mock assessment report for a Product & Tech Team, showing the structure and depth of a typical engagement output: what was found, what is recommended, and what the pilot plan looks like at the end of week 3.
Mock Client
Acme Digital
Approach
The assessment ran over three weeks. Interviews were conducted across product, design, and engineering, not to audit but to understand the actual flow of work. The real delivery chain was mapped, the operating model constraint was identified, and a pilot was designed that is ready to implement with no additional planning phase.
- Week 1 — Operating model mapping
  Role-by-role interviews across product, design, and engineering · Map the actual delivery chain — not the org chart · Identify where AI is already in use and where it isn't
- Week 2 — Bottleneck analysis & readiness check
  Identify the highest-leverage intervention points · Assess AI adoption across the full lifecycle — not just engineering · Readiness check: what needs to be true before you embed agents
- Week 3 — Pilot design — ready to start
  Select pilot team and redesign their workflow end-to-end · Define agent integration: which agents, which stages, which quality gates · Set baseline metrics so the difference is measurable from day one
Current State Flow
The end-to-end product delivery flow was mapped from idea intake through to production release and feedback. Bottlenecks and pain points are highlighted in the flow below.
Findings & Root Causes
Inconsistent delivery practices
Each squad runs a different variant of Scrum. No shared definition of done, no common backlog hygiene standards.
Root cause: No PDLC playbook or squad-level operating model — each lead improvised as the team grew.
Slow cycle time (11-day avg)
Work sits in "In Review" or "Ready for QA" for 3–4 days on average before progressing.
Root cause: Manual testing bottleneck + no WIP limits + PR review round-trips averaging 2.3 cycles.
AI adoption near zero
Two engineers use Copilot informally. No team-level AI tooling, no policy, no structured evaluation of use cases.
Root cause: Leadership unsure where to start; no governance framework; perceived IP/security risk with no clear guidelines.
Discovery disconnected from delivery
Product managers run discovery in isolation. No dual-track cadence — discoveries land as large, undefined epics.
Root cause: No discovery playbook. PM capacity stretched across too many squads (1 PM : 3 squads).
Limited metrics & visibility
No team-level dashboards. Leadership relies on weekly status updates written manually by squad leads.
Root cause: Jira configured inconsistently across squads; no data engineering capacity to build reporting.
Release gating by ops
All releases go through a single ops engineer who runs manual deployment checklists. Max 1 release/week per squad.
Root cause: No CI/CD pipeline maturity; deployment process never automated beyond basic scripts.
Maturity Assessment
Each dimension is scored 1–5 based on interview evidence, artefact review, and survey data. The spider chart shows the current state baseline; target state (dashed) represents a realistic 6-month goal.
Recommendations
Prioritised by impact and feasibility. Quick wins first, then structural changes.
Standardise definition of done & backlog hygiene
Align all 6 squads on a shared DoD, story format, and refinement checklist. Estimated: 1–2 sprints.
Introduce WIP limits and flow metrics dashboards
Configure Jira boards with WIP limits and deploy a shared dashboard (cycle time, throughput, age). Estimated: 1 sprint.
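To make the dashboard concrete, here is a minimal Python sketch of how the three flow metrics (cycle time, throughput, work-item age) could be computed from exported issue data. The records, field names, and dates below are invented for illustration and do not reflect an actual Jira schema or export format.

```python
from datetime import date, timedelta

# Hypothetical issue records exported from a tracker (field names are
# illustrative, not an actual Jira schema).
issues = [
    {"key": "ACME-101", "started": date(2024, 1, 2), "done": date(2024, 1, 12)},
    {"key": "ACME-102", "started": date(2024, 1, 3), "done": date(2024, 1, 15)},
    {"key": "ACME-103", "started": date(2024, 1, 8), "done": None},  # in progress
]

today = date(2024, 1, 16)

completed = [i for i in issues if i["done"] is not None]

# Cycle time: days from work start to completion, averaged over finished items.
cycle_times = [(i["done"] - i["started"]).days for i in completed]
avg_cycle_time = sum(cycle_times) / len(cycle_times)

# Throughput: items finished within the reporting window (last 14 days here).
window_start = today - timedelta(days=14)
throughput = sum(1 for i in completed if i["done"] >= window_start)

# Work-item age: how long each unfinished item has been in progress.
ages = {i["key"]: (today - i["started"]).days for i in issues if i["done"] is None}

print(avg_cycle_time, throughput, ages)
```

The same three numbers, recomputed per squad each week, give leadership the visibility the findings say is missing, without manual status reports.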
Deploy AI coding assistant across all squads
Roll out GitHub Copilot with usage policy, governance guardrails, and squad-level AI champions. Estimated: 2–3 sprints.
Establish dual-track discovery cadence
Create discovery playbook; reallocate PM capacity (target 1 PM : 2 squads max); introduce weekly discovery sync. Estimated: 2–4 sprints.
Automate CI/CD and remove release gating
Build pipeline automation, feature flags, and automated smoke tests to enable continuous deployment. Estimated: 4–6 sprints.
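As a sketch of the feature-flag piece, the snippet below shows one minimal way to ship a new code path dark and enable it without a redeploy. The flag name and code paths are hypothetical; a production rollout would use a flag service with per-squad targeting and kill switches rather than raw environment variables.

```python
import os

def flag_enabled(name: str, default: bool = False) -> bool:
    """Read a boolean flag from the environment, e.g. FLAG_NEW_CHECKOUT=1."""
    raw = os.environ.get(f"FLAG_{name.upper()}")
    if raw is None:
        return default
    return raw.strip().lower() in {"1", "true", "on", "yes"}

def checkout(order_total: float) -> str:
    # The new path ships dark behind the flag; flipping the variable
    # switches traffic over with no release-gating by ops.
    if flag_enabled("new_checkout"):
        return f"new pipeline: {order_total:.2f}"
    return f"legacy pipeline: {order_total:.2f}"

os.environ["FLAG_NEW_CHECKOUT"] = "1"
print(checkout(42.0))  # routed to the new path
```

Combined with automated smoke tests in the pipeline, this pattern is what removes the single-ops-engineer bottleneck: releases become routine merges, not weekly checklist events.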
AI-assisted QA and document generation
Implement AI-powered test generation, PRD drafting, and acceptance criteria writing across the PDLC. Estimated: 4–8 sprints.
AI Implementation Approach
Agents are embedded directly into the pilot workflow — not as a separate AI initiative. The approach follows a crawl → walk → run model aligned to delivery readiness and quality gates.
Foundation & Governance
Establish AI usage policy and security guardrails. Deploy Copilot to the pilot squad. Define quality gates for agent-generated output. Train the pilot team on prompt patterns and safe use.
Agents in the Workflow
AI-assisted code review, PR summarisation, and acceptance criteria generation live in the pilot squad. Fortnightly upskilling sessions. Measure cycle time and flow velocity against week 3 baselines.
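The baseline comparison described above can be sketched in a few lines: capture the week-3 numbers once, then report each subsequent week as a percent change against them. All figures below are illustrative, not measured results.

```python
# Week-3 baselines captured during the assessment (illustrative numbers;
# the 11-day cycle time mirrors the finding above).
baseline = {"cycle_time_days": 11.0, "throughput_per_week": 6}

def vs_baseline(current: dict, base: dict) -> dict:
    """Percent change of each metric relative to its baseline.
    A negative cycle-time change and a positive throughput change
    both indicate improvement."""
    return {k: round((current[k] - base[k]) / base[k] * 100, 1) for k in base}

# Hypothetical week-6 readings from the pilot squad.
week_6 = {"cycle_time_days": 8.5, "throughput_per_week": 8}
print(vs_baseline(week_6, baseline))
```

Reporting deltas rather than raw numbers keeps the fortnightly review focused on whether the agents are actually moving the metrics the pilot was designed around.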
Advanced Use Cases
AI test generation in CI pipeline. PRD and spec drafting agents. Discovery synthesis agents. Evaluate results against baselines and decide whether to expand to additional squads.
Pilot Plan
The assessment completes at the end of week 3, and the pilot starts in week 4 with no gap. This aligns with the sample roadmap.
| Workstream | Weeks 1–2 Discovery | Week 3 Pilot design | Week 4+ Implement |
|---|---|---|---|
| Operating model & delivery flow | Stakeholder interviews; process mapping; AI adoption audit | Bottleneck analysis; pilot squad selected | — |
| Pilot design & agents | — | Workflow redesigned; agent architecture; quality gates defined | Agents embedded; new ways of working live; iterate |
| Measurement & baselines | Readiness check; baseline data collection begins | Baseline metrics set — cycle time, flow, AI adoption | Flow metrics tracked; velocity vs. baseline weekly |
| Team enablement | — | Ways of working playbook (draft) | Squad coaching on new operating model; AI upskilling |
→ View the full interactive roadmap to see how workstreams and sprints are structured.
Want a report like this for your organisation?
Every assessment is tailored to your context — your teams, your industry, your goals. A short conversation is all it takes to get started.