RICE scoring was developed by Sean McBride at Intercom and published in 2015. It has since become one of the most widely used feature prioritization frameworks in product management because it balances multiple factors into a single comparable score — making it easier to rank a heterogeneous backlog of features against each other.
The core insight of RICE is that two factors that seem like they should dominate prioritization — Impact (how much does this help users?) and Effort (how long does it take?) — are both unreliable on their own. A high-impact feature that affects 5 users is less valuable than a medium-impact feature that affects 50,000. A low-effort feature whose estimate rests on guesswork (low Confidence) should be treated differently from a low-effort feature backed by strong prior evidence. RICE brings Reach and Confidence into the calculation to correct for these blind spots.
The Confidence factor is the most commonly underused element of RICE. Teams often score Impact 3 (massive) and Confidence 100% because the feature feels important — without data to justify either score. The discipline of RICE scoring is in honest Confidence estimation: 20% for a pure hypothesis, 50% for some qualitative signal, 80% for user research data, 100% for proven prior evidence. A feature scored at Impact 3 and Confidence 20% (0.20) produces a RICE contribution of 0.6 — far lower than Impact 1 and Confidence 80% (0.80), which contributes 0.8. The framework punishes overconfident guesses.
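The Impact × Confidence arithmetic above can be checked in two lines — the numbers are the ones from this paragraph:

```python
# Confidence acts as a multiplier on Impact, so an overconfident guess
# scores below a modest, well-evidenced estimate.
overconfident = 3 * 0.20  # Impact 3 (massive), Confidence 20% -> ~0.6
evidenced = 1 * 0.80      # Impact 1 (medium), Confidence 80% -> 0.8
print(overconfident < evidenced)  # True: the honest estimate wins
```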
RICE vs ICE scoring
Both RICE and ICE are scoring frameworks for prioritization, but they solve slightly different problems:
RICE includes Reach (how many users) and uses Effort measured in person-weeks. It is most precise when you have data to estimate how many users a feature will affect — segment sizes, DAU counts, or customer data from your analytics tool. RICE is the better choice for teams with real user data and a meaningful range in how many users different features affect.
ICE (Impact × Confidence × Ease) scores all three factors on a 1–10 scale. Ease is the inverse of Effort — 10 means trivially easy, 1 means months of engineering. ICE is faster to run, better for early-stage products where Reach is hard to estimate, and works well for evaluating many small experiments where all features affect roughly the same user base.
The choice: use RICE when you have data; use ICE when you are moving fast and want a lighter-weight scoring process. Many teams use both: RICE for the quarterly roadmap, ICE for the weekly experiment backlog.
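The two formulas side by side, as a minimal sketch (factor scales follow the article: RICE uses Reach counts, Impact on the Intercom scale, Confidence as a fraction, and Effort in person-weeks; ICE puts all three factors on 1–10):

```python
def rice(reach, impact, confidence, effort_weeks):
    """RICE = (Reach x Impact x Confidence) / Effort in person-weeks."""
    return (reach * impact * confidence) / effort_weeks

def ice(impact, confidence, ease):
    """ICE = Impact x Confidence x Ease, each scored 1-10.
    Ease is the inverse of Effort: 10 = trivially easy."""
    return impact * confidence * ease

print(rice(1_000, 2, 0.80, 4))  # 400.0
print(ice(8, 7, 9))             # 504
```

Note the structural difference: RICE divides by cost, while ICE multiplies by its inverse (Ease), which is what makes ICE quicker to eyeball but coarser.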
How to Use RICE Scoring in Product Management
To run a RICE scoring session:
- Collect all candidate features in a single list — no filtering yet.
- Estimate Reach for each feature: how many users or accounts does this affect in a quarter? Use your analytics tool for segment sizes. Be consistent — use the same time period across all features.
- Score Impact on the Intercom scale: 0.25 (minimal), 0.5 (low), 1 (medium), 2 (high), 3 (massive). Anchor to your primary metric — "what movement on our North Star metric does this feature cause per user who encounters it?"
- Score Confidence as a percentage: 20% (pure hypothesis), 50% (qualitative signal from interviews), 80% (user research data), 100% (strong prior evidence). Be honest — overconfidence inflates scores on unvalidated ideas.
- Estimate Effort in person-weeks with your tech lead. 1 = one engineer for one week. Be consistent in complexity-to-effort mapping across features.
- Calculate and sort: RICE = (R × I × C) / E. Sort descending. The top of the list is your priority order.
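The calculate-and-sort step can be sketched in a few lines of Python. The backlog entries here are hypothetical placeholders — substitute your own features and estimates:

```python
def rice_score(reach, impact, confidence, effort_weeks):
    """RICE = (R x I x C) / E, with Effort in person-weeks."""
    return (reach * impact * confidence) / effort_weeks

# Hypothetical backlog: (feature, reach/quarter, impact, confidence, effort)
backlog = [
    ("Onboarding checklist", 5_000, 1, 0.80, 4),
    ("CSV export", 1_200, 0.5, 1.00, 2),
    ("AI summaries", 9_000, 2, 0.20, 10),
]

# Sort descending: the top of the list is the priority order.
ranked = sorted(backlog, key=lambda f: rice_score(*f[1:]), reverse=True)
for name, *factors in ranked:
    print(f"{name}: {rice_score(*factors):.0f}")
```

Note how the pattern from the Confidence discussion shows up even in this toy data: "AI summaries" has the largest Reach and Impact, but its 20% Confidence pulls it below the well-evidenced onboarding work.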
Run RICE scoring as a team, not solo. The tech lead brings Effort precision; a data analyst brings Reach validation; the PM brings Impact and Confidence judgment. Group scoring sessions take 60–90 minutes and produce more calibrated results than individual scoring.
RICE Scoring Examples
1. RICE scoring: Slack notification feature
Feature: Automated Slack notifications for PRD status changes. Reach: 800 workspace admins per quarter. Impact: 2 (significantly reduces review delay — directly affects core activation metric). Confidence: 70% (3 user interviews confirmed the pain point, no A/B data yet). Effort: 3 person-weeks. RICE = (800 × 2 × 0.70) / 3 ≈ 373. Compare this score against the rest of the backlog's RICE scores to place the feature in the ranking.
2. RICE scoring: mobile app feature
Feature: iOS/Android native app. Reach: 12,000 monthly active users (estimated 60% mobile-first). Impact: 2 (high — enables daily active use for mobile users who currently use mobile web). Confidence: 50% (qualitative feedback from interviews, no data on actual mobile usage). Effort: 24 person-weeks (iOS + Android native apps from scratch). RICE = (12,000 × 2 × 0.50) / 24 = 500. High RICE score — but Confidence is low. Appropriate next step: instrument mobile web usage to increase Confidence before committing.
3. RICE scoring: small UX improvement
Feature: Add keyboard shortcut (Cmd+Enter) to trigger PRD generation. Reach: 3,200 power users per quarter (estimated 40% of DAU). Impact: 0.5 (low — modest time saving, secondary metric). Confidence: 90% (similar keyboard shortcuts in comparable tools show high adoption). Effort: 0.2 person-weeks (2 days of frontend work). RICE = (3,200 × 0.5 × 0.90) / 0.2 = 7,200. Very high RICE score — demonstrates how small, high-confidence improvements can outrank large features when Effort is minimal.
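The three worked examples above can be verified with the same formula — same inputs, same results:

```python
def rice(reach, impact, confidence, effort_weeks):
    """RICE = (Reach x Impact x Confidence) / Effort in person-weeks."""
    return (reach * impact * confidence) / effort_weeks

print(round(rice(800, 2, 0.70, 3)))        # Slack notifications: 373
print(round(rice(12_000, 2, 0.50, 24)))    # native mobile app:   500
print(round(rice(3_200, 0.5, 0.90, 0.2)))  # keyboard shortcut:   7200
```

The keyboard shortcut wins by more than an order of magnitude: dividing by a tiny Effort (0.2 person-weeks) dominates even a 12,000-user Reach, which is exactly the dynamic the third example highlights.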
How Scriptonia Automates This
Scriptonia uses RICE-like reasoning internally when generating feature prioritization recommendations in product briefs. When you describe a feature, Scriptonia surfaces the estimated user impact and implementation complexity — giving you the inputs you need to run a RICE score without starting from scratch.
For teams using Linear, the generated engineering tickets include story-point estimates that map directly to RICE Effort scores. A ticket estimated at 3 story points ≈ 0.6 person-weeks, giving you a consistent Effort input for your RICE calculations.