Flow Metrics
These metrics describe how work moves through your system.
Velocity
The sum of story points for all completed stories in a sprint. Use a rolling average of the last 3–6 sprints for planning.
Avg Velocity = Σ(sprint points, last N sprints) / N
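The rolling average above can be sketched in a few lines; the sprint totals here are made-up example data:

```python
# Rolling average velocity over the last N sprints (hypothetical data).
def avg_velocity(sprint_points, n=3):
    """Average story points over the most recent n sprints."""
    recent = sprint_points[-n:]
    return sum(recent) / len(recent)

points = [21, 34, 29, 25, 31]   # story points completed, oldest to newest
print(avg_velocity(points, n=3))  # averages 29, 25, 31
```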
Cycle Time
Time elapsed from when a work item moves to "In Progress" until it reaches "Done".
Cycle Time = Done Date − Start Date
Lead Time
Total time from when a request is made to when it is delivered. Includes queue time before work begins.
Lead Time = Done Date − Request Date
= Queue Time + Cycle Time
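The two date arithmetic formulas above, and the identity between them, can be checked directly; the dates are hypothetical:

```python
from datetime import date

# Hypothetical ticket: filed Mar 1, work started Mar 8, delivered Mar 15.
request_date = date(2024, 3, 1)
start_date = date(2024, 3, 8)    # moved to "In Progress"
done_date = date(2024, 3, 15)    # reached "Done"

queue_time = (start_date - request_date).days   # 7 days waiting
cycle_time = (done_date - start_date).days      # 7 days in progress
lead_time = (done_date - request_date).days     # 14 days end to end
assert lead_time == queue_time + cycle_time
```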
Throughput
The number of work items completed per unit time (week or sprint). Use alongside or instead of velocity for teams not using story points.
Throughput = Items Completed / Time Period

Little's Law: WIP = Throughput × Cycle Time
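Little's Law relates the three flow quantities, so any one can be derived from the other two; a quick sketch with made-up numbers:

```python
# Hypothetical numbers: 5 items finish per week, each spends 2 weeks in flow.
throughput = 5      # items per week
cycle_time = 2.0    # weeks per item, on average
wip = throughput * cycle_time   # Little's Law -> 10 items in progress

# Rearranged: if WIP grows while throughput stays flat, cycle time must rise.
cycle_time_check = wip / throughput  # 2.0 weeks
```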
Sprint Burndown
A chart showing remaining work (story points or tasks) plotted against the days of the sprint. A flat line reveals a blocker; a line that rises mid-sprint indicates scope creep.
Release Burnup
A chart showing completed work versus total scope over time. Unlike burndown, burnup makes scope changes visible — the total line moves up when scope is added.
Escaped Defects
Bugs that reach customers after a release — not caught by the team's own testing. The lower this number, the stronger the Definition of Done and test coverage.
Escaped Defect % = Post-release Bugs / Total Bugs × 100

Target: <10% escape rate
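The escape-rate formula and target check, with hypothetical release numbers:

```python
# Hypothetical release: 50 bugs found in total, 4 of them reported by customers.
post_release_bugs = 4
total_bugs = 50

escape_rate = post_release_bugs / total_bugs * 100  # 8.0%
meets_target = escape_rate < 10   # under the <10% target
```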
Estimation Techniques
Choose based on your team's maturity, data availability, and forecasting needs.
Three-Point PERT
Combines three scenarios into a weighted average. The standard deviation quantifies how uncertain the estimate is.
E = (O + 4M + P) / 6
σ = (P − O) / 6

Where: O = Optimistic, M = Most Likely, P = Pessimistic
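The weighted average and standard deviation translate directly into code; the task durations below are hypothetical:

```python
def pert_estimate(o, m, p):
    """Three-point PERT: weighted expected value and standard deviation."""
    e = (o + 4 * m + p) / 6
    sigma = (p - o) / 6
    return e, sigma

# Hypothetical task: optimistic 3 days, most likely 5, pessimistic 13.
e, sigma = pert_estimate(3, 5, 13)
print(f"Estimate: {e:.1f} ± {sigma:.1f} days")
```

Communicating the result as E ± σ, rather than a single number, keeps the uncertainty visible.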
Monte Carlo Simulation
Use historical throughput data to simulate thousands of possible futures and produce a probability distribution for delivery dates.
Steps:
1. Collect throughput data (items/sprint, last 10+ sprints)
2. Count remaining backlog items
3. Simulate 10,000 sprints by sampling from historical throughput
4. Plot completion dates as a distribution

Output:
→ 50th percentile = "likely" date
→ 85th percentile = commitment date (external stakeholders)
→ 95th percentile = contractual guarantee
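The steps above can be sketched as a small simulation; the throughput history and backlog size are hypothetical:

```python
import random

# Hypothetical inputs: items completed per sprint for the last 10 sprints,
# and the number of backlog items still to deliver.
historical_throughput = [4, 6, 5, 7, 3, 6, 5, 8, 4, 6]
backlog = 40
runs = 10_000

results = []
for _ in range(runs):
    remaining, sprints = backlog, 0
    while remaining > 0:
        # Sample one sprint's throughput from the team's own history.
        remaining -= random.choice(historical_throughput)
        sprints += 1
    results.append(sprints)

# Read percentiles off the sorted distribution of completion times.
results.sort()
p50, p85, p95 = (results[int(runs * q)] for q in (0.50, 0.85, 0.95))
print(f"50%: {p50} sprints  85%: {p85} sprints  95%: {p95} sprints")
```

Multiply each percentile by the sprint length to turn it into a calendar date.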
Anti-Patterns
| Anti-Pattern | Why It's Harmful | Better Approach |
|---|---|---|
| Cross-team velocity comparison | Different teams inflate points to appear faster | Track throughput % improvement within the same team |
| Velocity as individual KPI | Creates point gaming; destroys team cohesion | Measure team outcomes, not individual output |
| 100% sprint commitment | No buffer for bugs, unplanned work, or helping teammates | Target 70–80% planned capacity |
| Estimates as commitments | Estimates are probabilistic; treating them as binary creates blame culture | Communicate ranges and confidence levels explicitly |
| Velocity as the only metric | Ignores quality, sustainability, and customer value | Balance with defect rate, NPS, cycle time |
| Averaging cycle time | One 30-day outlier destroys the mean | Use 85th percentile for commitments; median for reporting |
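The last row of the table is easy to demonstrate: a single outlier drags the mean well above what the team typically experiences, while the median and a crude 85th percentile stay informative. The cycle times below are hypothetical:

```python
import statistics

# Hypothetical cycle times in days — one 30-day outlier among routine items.
cycle_times = sorted([2, 3, 3, 4, 4, 5, 5, 6, 7, 30])

mean = statistics.mean(cycle_times)       # dragged up by the single outlier
median = statistics.median(cycle_times)   # robust "typical" value for reporting
p85 = cycle_times[int(0.85 * len(cycle_times))]  # crude 85th percentile for commitments
print(mean, median, p85)
```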
Formula Cheat Sheet
Flow Metrics
Lead Time = Done Date − Request Date
Cycle Time = Done Date − Start Date
Throughput = Items Done / Time Period
Little's Law: WIP = Throughput × Cycle Time

Velocity & Forecasting
Velocity (avg) = Σ(points, last N sprints) / N
Forecast Date = Today + (Scope Remaining / Avg Velocity) × Sprint Length
Monte Carlo: 85% → use for external commitments

Estimation (PERT)
Expected (E) = (O + 4M + P) / 6
Std Deviation (σ) = (P − O) / 6
Communicate as E ± σ

Quality
Escaped Defect % = Post-release Bugs / Total Bugs × 100
Defect Density = Defects / Story Points (or KLOC)