Example Report

A mock assessment report for a Product & Tech Team, showing the structure and depth of a typical engagement output: what's found, what's recommended, and what the pilot plan looks like at the end of week 3.

Acme Digital

Mid-size SaaS company · 6 product squads · ~80 engineers · Series B

Approach

The assessment ran over three weeks. Interviews were held across product, design, and engineering, not to audit but to understand how work actually flows. The real delivery chain was mapped, the operating-model constraint identified, and a pilot designed that is ready to implement with no additional planning phase.

  • Week 1 — Operating model mapping
    Role-by-role interviews across product, design, and engineering · Map the actual delivery chain — not the org chart · Identify where AI is already in use and where it isn't
  • Week 2 — Bottleneck analysis & readiness check
    Identify the highest-leverage intervention points · Assess AI adoption across the full lifecycle — not just engineering · Readiness check: what needs to be true before you embed agents
  • Week 3 — Pilot design — ready to start
    Select pilot team and redesign their workflow end-to-end · Define agent integration: which agents, which stages, which quality gates · Set baseline metrics so the difference is measurable from day one

Current State Flow

The end-to-end product delivery flow was mapped from idea intake through to production release and feedback. Bottlenecks and pain points are highlighted in the flow below.

  • 💡 Idea Intake: Ad-hoc Slack requests, stakeholder emails, no single backlog
  • 📋 Prioritisation: No clear framework; HiPPO-driven, inconsistent across squads · ⚠ Bottleneck
  • ✏️ Refinement: Varies by squad (30 min to 2 hrs); inconsistent acceptance criteria
  • 🔨 Build & QA: Avg cycle time 11 days; manual testing dominant
  • 🚀 Release: Manual deploy process; 1 release/week, gated by ops team · ⚠ Bottleneck
  • 📊 Feedback: No structured feedback loop back to discovery

Findings & Root Causes

Inconsistent delivery practices

Each squad runs a different variant of Scrum. No shared definition of done, no common backlog hygiene standards.

Root cause: No PDLC playbook or squad-level operating model — each lead improvised as the team grew.

Slow cycle time (11-day avg)

Work sits in "In Review" or "Ready for QA" for 3–4 days on average before progressing.

Root cause: Manual testing bottleneck + no WIP limits + PR review round-trips averaging 2.3 cycles.

AI adoption near zero

Two engineers use Copilot informally. No team-level AI tooling, no policy, no structured evaluation of use cases.

Root cause: Leadership unsure where to start; no governance framework; perceived IP/security risk with no clear guidelines.

Discovery disconnected from delivery

Product managers run discovery in isolation. No dual-track cadence — discoveries land as large, undefined epics.

Root cause: No discovery playbook. PM capacity stretched across too many squads (1 PM : 3 squads).

Limited metrics & visibility

No team-level dashboards. Leadership relies on weekly status updates written manually by squad leads.

Root cause: Jira configured inconsistently across squads; no data engineering capacity to build reporting.

Release gating by ops

All releases go through a single ops engineer who runs manual deployment checklists. Max 1 release/week per squad.

Root cause: No CI/CD pipeline maturity; deployment process never automated beyond basic scripts.

Maturity Assessment

Each dimension is scored 1–5 based on interview evidence, artefact review, and survey data. The spider chart shows the current state baseline; target state (dashed) represents a realistic 6-month goal.

[Spider chart: current state vs. 6-month target]

Current-state scores:

  • Discovery: 2.2
  • Delivery: 2.8
  • AI Adoption: 1.4
  • Ways of Working: 2.5
  • Metrics & Visibility: 1.8
  • Playbooks & Standards: 1.6

Recommendations

Prioritised by impact and feasibility. Quick wins first, then structural changes.

Quick Win

Standardise definition of done & backlog hygiene

Align all 6 squads on a shared DoD, story format, and refinement checklist. Estimated: 1–2 sprints.

Quick Win

Introduce WIP limits and flow metrics dashboards

Configure Jira boards with WIP limits and deploy a shared dashboard (cycle time, throughput, age). Estimated: 1 sprint.
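
For illustration, a minimal sketch of the flow metrics such a dashboard would surface, assuming a hypothetical tickets.csv export with one row per work item and started/done timestamps (real Jira exports vary by configuration):

```python
# Sketch: cycle time, throughput, and WIP age from a ticket export.
# Assumes a hypothetical tickets.csv with columns key, started, done
# (ISO dates; done is empty for in-flight items).
import csv
from datetime import date, datetime

today = date.today()
cycle_times, in_flight_ages = [], []

with open("tickets.csv", newline="") as f:
    for row in csv.DictReader(f):
        started = datetime.fromisoformat(row["started"]).date()
        if row["done"]:
            done = datetime.fromisoformat(row["done"]).date()
            cycle_times.append((done - started).days)  # start -> done, in days
        else:
            in_flight_ages.append((today - started).days)  # age of in-flight item

print(f"Completed: {len(cycle_times)} items")
print(f"Avg cycle time: {sum(cycle_times) / len(cycle_times):.1f} days")
print(f"WIP: {len(in_flight_ages)} items, oldest {max(in_flight_ages)} days old")
```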

Medium Term

Deploy AI coding assistant across all squads

Roll out GitHub Copilot with usage policy, governance guardrails, and squad-level AI champions. Estimated: 2–3 sprints.

Medium Term

Establish dual-track discovery cadence

Create discovery playbook; reallocate PM capacity (target 1 PM : 2 squads max); introduce weekly discovery sync. Estimated: 2–4 sprints.

Strategic

Automate CI/CD and remove release gating

Build pipeline automation, feature flags, and automated smoke tests to enable continuous deployment. Estimated: 4–6 sprints.
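
As one possible shape for the automated smoke tests, a hedged sketch of a deploy-gate script; the endpoints and latency budget are placeholders, not Acme's actual services:

```python
# Sketch: post-deploy smoke test that gates the release pipeline.
# URLs and the latency budget are hypothetical placeholders.
import sys
import time
import urllib.request

CHECKS = [
    ("api health", "https://api.example.com/healthz"),
    ("web home", "https://www.example.com/"),
]
LATENCY_BUDGET_S = 2.0  # fail the gate if a check is slower than this

failed = False
for name, url in CHECKS:
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            elapsed = time.monotonic() - start
            ok = resp.status == 200 and elapsed <= LATENCY_BUDGET_S
    except OSError:  # covers HTTP errors, DNS failures, timeouts
        ok, elapsed = False, time.monotonic() - start
    print(f"{'PASS' if ok else 'FAIL'}  {name}  ({elapsed:.2f}s)")
    failed = failed or not ok

sys.exit(1 if failed else 0)  # non-zero exit blocks the deploy step
```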

Strategic

AI-assisted QA and document generation

Implement AI-powered test generation, PRD drafting, and acceptance criteria writing across the PDLC. Estimated: 4–8 sprints.
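
For flavour, a minimal sketch of the test-generation idea, assuming an OpenAI-style chat API as the provider; the model name and prompt are illustrative, and anything generated would still pass through the human quality gates defined for the pilot:

```python
# Sketch: draft pytest tests for a function from its source plus
# acceptance criteria. Provider, model, and prompt are illustrative;
# generated tests are a starting point for review, not committed as-is.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def draft_tests(source_code: str, acceptance_criteria: str) -> str:
    prompt = (
        "Write pytest unit tests for the function below. Cover each "
        "acceptance criterion and at least one edge case.\n\n"
        f"Function:\n{source_code}\n\n"
        f"Acceptance criteria:\n{acceptance_criteria}"
    )
    resp = client.chat.completions.create(
        model="gpt-4o",  # example model; use whatever governance approves
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

if __name__ == "__main__":
    print(draft_tests(
        "def apply_discount(price, pct):\n    return price * (1 - pct / 100)",
        "- rejects pct outside 0-100\n- never returns a negative price",
    ))
```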

AI Implementation Approach

Agents are embedded directly into the pilot workflow — not as a separate AI initiative. The approach follows a crawl → walk → run model aligned to delivery readiness and quality gates.

Crawl

Foundation & Governance

Establish AI usage policy and security guardrails. Deploy Copilot to the pilot squad. Define quality gates for agent-generated output. Train the pilot team on prompt patterns and safe use.

Week 3 (assessment) → Week 4 (pilot launch) · See roadmap

Walk

Agents in the Workflow

AI-assisted code review, PR summarisation, and acceptance criteria generation go live in the pilot squad. Fortnightly upskilling sessions. Measure cycle time and flow velocity against week-3 baselines.

Weeks 4–6 · Pilot in full swing · See roadmap

Run

Advanced Use Cases

AI test generation in CI pipeline. PRD and spec drafting agents. Discovery synthesis agents. Evaluate results against baselines and decide whether to expand to additional squads.

Weeks 6+ · Scaling what works · See roadmap

Pilot Plan

The assessment completes at the end of week 3 and the pilot starts in week 4 with no gap, in line with the sample roadmap.

Workstreams by phase (Weeks 1–2: discovery · Week 3: pilot design · Week 4+: implement):

  • Operating model & delivery flow
    Weeks 1–2: Stakeholder interviews; process mapping; AI adoption audit · Week 3: Bottleneck analysis; pilot squad selected
  • Pilot design & agents
    Week 3: Workflow redesigned; agent architecture; quality gates defined · Week 4+: Agents embedded; new ways of working live; iterate
  • Measurement & baselines
    Weeks 1–2: Readiness check; baseline data collection begins · Week 3: Baseline metrics set (cycle time, flow, AI adoption) · Week 4+: Flow metrics tracked; velocity vs. baseline weekly
  • Team enablement
    Week 3: Ways of working playbook (draft) · Week 4+: Squad coaching on new operating model; AI upskilling

→ View the full interactive roadmap to see how workstreams and sprints are structured.

Want a report like this for your organisation?

Every assessment is tailored to your context — your teams, your industry, your goals. A short conversation is all it takes to get started.