EXECUTION

Product metrics guide: which metrics to track, how to define them, and what to do when they move

Most teams track too many metrics and act on too few. Here's how to define the 5–8 metrics that actually drive product decisions — and how to build the review cadence that makes them useful.

Jun 10, 2026 · Updated: Jun 10, 2026 · 7 min read · By Scriptonia

The average product team tracks 20+ metrics and makes product decisions based on 3 of them. The gap between metrics tracked and metrics acted on is the largest measurable waste in product analytics (internal study, 2026). The fix is not better dashboards — it's fewer metrics with clearer ownership and faster review loops.

"Every team I've worked with that struggled with metrics had the same problem: too many metrics, no clear owner, no action threshold. Pick 5 metrics, assign an owner to each, define what 'broken' looks like for each, and review weekly. Everything else is noise."

— Jenny T., Head of Data at a product-led SaaS company

The metric framework: four categories

Primary metric (1): The north star metric. The single number that, if it moves positively, means the product is working. Examples: weekly active users, tasks completed per user, revenue per seat.

Leading indicators (2–3): Metrics that predict the primary metric 2–4 weeks in advance. Examples: activation rate, time-to-first-value, D7 retention. These are the metrics you pull levers on.

Guardrail metrics (2–3): Metrics you must not hurt while improving the primary metric. Examples: support ticket volume, NPS, page load time. If guardrail metrics degrade, stop — even if the primary metric is improving.

Health metrics (ongoing): Baseline system health indicators — error rate, latency, uptime. Engineering owns these, but PMs need visibility when they affect product experience.

How to define a good success metric

| Criterion | Good metric | Bad metric |
| --- | --- | --- |
| Specific | 7-day activation rate | Engagement |
| Measurable | % users who create ≥3 documents in 7 days | User satisfaction |
| Actionable | Time-to-first-successful-export | Overall product quality |
| Baseline exists | Current 7-day activation: 28% | No baseline data |
| Target defined | Target: 45% by Q3 end | Target: "improve" |
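To make the "measurable" row concrete, here is one way the 7-day activation definition ("% of users who create ≥3 documents in 7 days") could be computed from raw event data. The event schema and field names are assumptions for illustration, not a real analytics API:

```python
from collections import Counter
from datetime import datetime, timedelta

def activation_rate(signups: dict[str, datetime],
                    events: list[tuple[str, str, datetime]],
                    threshold: int = 3,
                    window_days: int = 7) -> float:
    """Fraction of signed-up users with >= threshold 'document_created'
    events within window_days of their signup."""
    counts: Counter[str] = Counter()
    for user_id, event_name, ts in events:
        if event_name != "document_created":
            continue
        signup = signups.get(user_id)
        if signup is not None and signup <= ts <= signup + timedelta(days=window_days):
            counts[user_id] += 1
    if not signups:
        return 0.0
    activated = sum(1 for u in signups if counts[u] >= threshold)
    return activated / len(signups)

# Tiny illustration: two signups, one activates.
t0 = datetime(2026, 6, 1)
signups = {"u1": t0, "u2": t0}
events = [("u1", "document_created", t0 + timedelta(days=d)) for d in (1, 2, 3)]
events += [("u2", "document_created", t0 + timedelta(days=1))]
print(activation_rate(signups, events))  # 0.5: u1 created 3 docs in-window, u2 only 1
```

Whatever the implementation, the definition should be written down precisely enough that two analysts would compute the same number.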

The metric review cadence

- Weekly: review leading indicators (activation, retention). Flag anomalies within 48 hours.
- Monthly: full metric review — primary metric, leading indicators, guardrails.
- Quarterly: review metric relevance — are we still measuring the right things?
- Post-launch: review all feature metrics at 30 days and 90 days. Add 90-day reviews to the PRD as a requirement before ship.
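"Flag anomalies within 48 hours" implies an automated check rather than someone eyeballing a dashboard once a week. One simple approach (the tolerance value is illustrative, not a recommendation) is to compare the current week against a trailing average:

```python
def is_anomaly(history: list[float], current: float, tolerance: float = 0.15) -> bool:
    """Flag if the current value deviates more than `tolerance`
    (relative) from the mean of the trailing weeks."""
    if not history:
        return False
    baseline = sum(history) / len(history)
    if baseline == 0:
        return current != 0
    return abs(current - baseline) / baseline > tolerance

# Activation rate held near 0.28 for four weeks, then dropped to 0.21.
print(is_anomaly([0.27, 0.28, 0.29, 0.28], 0.21))  # True: ~25% below baseline
print(is_anomaly([0.27, 0.28, 0.29, 0.28], 0.27))  # False: within tolerance
```

Even a crude check like this turns the weekly review from "stare at charts" into "triage the flags", which is what makes the 48-hour window achievable.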

Frequently asked questions

What product metrics should a PM track?

Track 5–8 metrics max: one north star (primary metric), 2–3 leading indicators that predict it, and 2–3 guardrail metrics you can't hurt. Common examples: primary metric (weekly active users or revenue per seat), leading indicators (activation rate, D7 retention, time-to-value), guardrails (NPS, support ticket volume, error rate). More than 8 metrics usually means the team lacks metric discipline.

What is a north star metric?

The north star metric is the single metric that best captures the core value your product delivers to users — if it moves positively, the product is working. Examples: Spotify (time spent listening), Airbnb (nights booked), Slack (messages sent by connected teams), Figma (collaboration sessions). The test: if the north star metric improves, are users genuinely better off? If yes, it's the right metric.

What is a guardrail metric?

A guardrail metric is a metric you must not hurt while pursuing your primary goal. Example: if you're optimizing for activation rate, your guardrail might be support ticket volume — if activation improves but support tickets spike, you've broken something in the user experience. Guardrail metrics prevent optimization that creates collateral damage.

How do you define success metrics before a feature launches?

Define metrics in the PRD before development starts: primary metric (what the feature is designed to move), baseline (current value), 30-day target, 90-day target, and instrumentation plan (how you'll measure it). The instrumentation plan is critical — if you can't measure a metric in your analytics tool, it doesn't exist. Book the 30-day and 90-day metric reviews before launch.

What's the difference between a lagging and a leading indicator?

A leading indicator moves before the primary outcome and predicts it — activation rate predicts retention. A lagging indicator reflects what already happened — revenue, churn, NPS. PMs should act primarily on leading indicators because lagging indicators confirm what happened; leading indicators give you time to intervene. Identify your 2–3 most predictive leading indicators and review them weekly.

Try Scriptonia free

Turn your next idea into a production-ready PRD in under 30 seconds. No account required to start.

Generate a PRD →
© 2026 Scriptonia