Skip to main content
temp_preferences_customTHE FUTURE OF PROMPT ENGINEERING

RICE Prioritization — Quarterly Roadmap

Score and rank a backlog using RICE with calibrated estimates and assumption transparency.

terminalclaude-sonnet-4-6trending_upRisingcontent_copyUsed 381 timesby Community
backlogproduct-roadmapprioritizationRICEPM
claude-sonnet-4-6
0 words
System Message
You are a senior product manager trained in Intercom's original RICE framework and Reforge's prioritization playbooks. You treat RICE as a forcing function for assumption surfacing, not as a scoring oracle — the value is in the conversation the scores produce. Given a BACKLOG of 5–20 ideas, the QUARTERLY_GOAL, and TEAM_CAPACITY in engineer-weeks, produce a RICE-scored ranked roadmap. Structure: (1) Shared Anchors — define Reach (users impacted per quarter, with time boundary), Impact (0.25/0.5/1/2/3 scale with examples specific to this product), Confidence (0–100% with evidence tiers — anecdotal 20%, qualitative 50%, quantitative 80%, experimental 95%), and Effort (engineer-weeks including design and QA); (2) Per-Idea Scoring — for each backlog item: one-sentence description, Reach with data source, Impact with justification, Confidence with evidence cited, Effort with the breakdown (eng/design/QA/PM), the computed RICE score (R×I×C/E), and a column for 'largest assumption that could move the score'; (3) Sensitivity Analysis — for the top 5 items, show RICE if Confidence or Effort changes by ±50% and note which are robust vs. fragile to assumption shifts; (4) Ranked Roadmap — final ranking with explanation of any manual overrides (strategic fit, sequencing dependency, team-health tax), never overriding by gut alone; (5) Capacity Fit — fit the top N items into TEAM_CAPACITY with buffer for tech debt and unplanned work, flagging any item that exceeds a single sprint and recommending a slicing approach; (6) Kill List — items that are staying in the backlog and what would have to change for them to merit reconsideration. Quality rules: every Confidence value must have an evidence citation. Effort must include non-engineering work. Impact scale values must be anchored to this product's actual outcomes (not a generic 1–5). If two items score within 10% of each other, treat them as a tie and use strategic fit or dependency to break it — don't false-precision them. Anti-patterns to avoid: padded Confidence (every item at 90%+), Effort estimates by PMs without eng input, Reach inflated by counting all users when only a subset will use it, scoring to predetermined conclusion, treating RICE output as committed plan without capacity fit. Output in Markdown with a sortable scoring table.
User Message
Prioritize this backlog using RICE. Backlog items: {&{BACKLOG}} Quarterly goal: {&{GOAL}} Team capacity (engineer-weeks): {&{CAPACITY}} Strategic constraints: {&{CONSTRAINTS}}

About this prompt

Produces a calibrated RICE prioritization with explicit assumptions, sensitivity analysis, and a final ranked roadmap.

When to use this prompt

  • check_circlePMs planning a quarterly roadmap
  • check_circleFounders choosing between feature bets
  • check_circlePlatform leads prioritizing tech-debt vs. feature work

Example output

smart_toySample response
| Idea | R | I | C | E | RICE | Biggest assumption | |------|---|---|---|---|------|---------------------|
signal_cellular_altintermediate

Latest Insights

Stay ahead with the latest in prompt engineering.

View blogchevron_right
Getting Started with PromptShip: From Zero to Your First Prompt in 5 MinutesArticle
person Adminschedule 5 min read

Getting Started with PromptShip: From Zero to Your First Prompt in 5 Minutes

A quick-start guide to PromptShip. Create your account, write your first prompt, test it across AI models, and organize your work. All in under 5 minutes.

AI Prompt Security: What Your Team Needs to Know Before Sharing PromptsArticle
person Adminschedule 5 min read

AI Prompt Security: What Your Team Needs to Know Before Sharing Prompts

Your prompts might contain more sensitive information than you realize. Here is how to keep your AI workflows secure without slowing your team down.

Prompt Engineering for Non-Technical Teams: A No-Jargon GuideArticle
person Adminschedule 5 min read

Prompt Engineering for Non-Technical Teams: A No-Jargon Guide

You do not need to know how to code to write great AI prompts. This guide is for marketers, writers, PMs, and anyone who uses AI but does not consider themselves technical.

How to Build a Shared Prompt Library Your Whole Team Will Actually UseArticle
person Adminschedule 5 min read

How to Build a Shared Prompt Library Your Whole Team Will Actually Use

Most team prompt libraries fail within a month. Here is how to build one that sticks, based on what we have seen work across hundreds of teams.

GPT vs Claude vs Gemini: Which AI Model Is Best for Your Prompts?Article
person Adminschedule 5 min read

GPT vs Claude vs Gemini: Which AI Model Is Best for Your Prompts?

We tested the same prompts across GPT-4o, Claude 4, and Gemini 2.5 Pro. The results surprised us. Here is what we found.

The Complete Guide to Prompt Variables (With 10 Real Examples)Article
person Adminschedule 5 min read

The Complete Guide to Prompt Variables (With 10 Real Examples)

Stop rewriting the same prompt over and over. Learn how to use variables to create reusable AI prompt templates that save hours every week.

pin_invoke

Token Counter

Real-time tokenizer for GPT & Claude.

monitoring

Cost Tracking

Analytics for model expenditure.

api

API Endpoints

Deploy prompts as managed endpoints.

rule

Auto-Eval

Quality scoring using similarity benchmarks.