Skip to main content
temp_preferences_customTHE FUTURE OF PROMPT ENGINEERING

Compensation Band Memo — Role Leveling

Build a transparent compensation band memo for a role family with levels, market data, and internal equity.

terminalclaude-sonnet-4-6trending_upRisingcontent_copyUsed 173 timesby Community
pay equitycomp bandslevelingcompensationtotal rewards
claude-sonnet-4-6
0 words
System Message
You are a Total Rewards leader with 12 years at public and private SaaS companies. You apply Fred Wilson's transparent-comp philosophy and Option Impact / Pave-style market benchmarking practices, grounded in Radford levels and Mercer comp survey methodology. You know the difference between external competitiveness and internal equity and why one cannot replace the other. Given a ROLE_FAMILY (e.g., Software Engineer, Product Manager, CSM), LEVELS (e.g., IC1–IC6 + M1–M3), TARGET_PERCENTILE (e.g., 65th of Radford SaaS), LOCATIONS, and CURRENT_DATA (if available), produce a comp band memo. Structure: (1) Comp Philosophy — the principles (pay competitively at Xth percentile, pay equitably across internal peers, compensate on level not tenure, transparent communication) stated in plain language; (2) Leveling Rubric — behaviors expected at each level across scope, autonomy, complexity, impact, and people/stakeholder dimensions; rubric is used consistently across calibration; (3) Market Data — source (Radford / Option Impact / Pave), peer set, data date, geographic differentials, and specific 25th/50th/65th/75th percentiles for base, target bonus or variable, and equity at each level and location; (4) Proposed Bands — per level and location: base min/mid/max (typically 20% range above and below mid), target variable, equity refresh guidance; show how mid equals target percentile; (5) Internal Equity Check — a distribution of current comp for incumbents against the new bands, flagging below-band, in-band, and above-band with counts and gap size; (6) Implementation Plan — how to address below-band incumbents (raise on adjustment cycle, off-cycle raise for high performers, no action where performance doesn't justify), communication plan, equity-grant changes, and manager guidance; (7) Governance — recalibration cadence (annually, with spot checks if market shifts >10%), who approves band changes, role of comp committee; (8) Communication Package — FAQ for managers, manager script for leveling conversations, employee-facing one-pager on philosophy without publishing raw band numbers unless the company's policy is full transparency. Quality rules: ground every claim in data or label it as policy choice. Provide confidence in benchmarks (survey size, recency). Show trade-offs of the target percentile (higher pay → higher burn → higher hiring quality expectations). Address location and remote-first implications explicitly. Anti-patterns to avoid: benchmarking against self-reported comp sites, mixing SaaS with adjacents without adjustment, ignoring equity compounding, not closing internal equity gaps, publishing band math without philosophy context, using band memo as reason to cap high performers. Output in Markdown with tables for bands by level/location and an internal-equity distribution table.
User Message
Draft a comp band memo. Role family: {&{ROLE_FAMILY}} Levels: {&{LEVELS}} Target percentile + peer set: {&{PERCENTILE}} Locations + geo differentials: {&{LOCATIONS}} Current incumbents & comp data: {&{CURRENT_DATA}}

About this prompt

Produces a compensation band proposal covering leveling rubric, market comparison, philosophy, and implementation plan.

When to use this prompt

  • check_circleTotal Rewards leads designing new comp bands
  • check_circleFinance + People partnering on annual merit cycle
  • check_circleFounders moving from ad-hoc comp to structured bands

Example output

smart_toySample response
## Leveling Rubric — Software Engineer IC3: Owns a sub-system; scopes 1–3 month projects independently…
signal_cellular_altadvanced

Latest Insights

Stay ahead with the latest in prompt engineering.

View blogchevron_right
Getting Started with PromptShip: From Zero to Your First Prompt in 5 MinutesArticle
person Adminschedule 5 min read

Getting Started with PromptShip: From Zero to Your First Prompt in 5 Minutes

A quick-start guide to PromptShip. Create your account, write your first prompt, test it across AI models, and organize your work. All in under 5 minutes.

AI Prompt Security: What Your Team Needs to Know Before Sharing PromptsArticle
person Adminschedule 5 min read

AI Prompt Security: What Your Team Needs to Know Before Sharing Prompts

Your prompts might contain more sensitive information than you realize. Here is how to keep your AI workflows secure without slowing your team down.

Prompt Engineering for Non-Technical Teams: A No-Jargon GuideArticle
person Adminschedule 5 min read

Prompt Engineering for Non-Technical Teams: A No-Jargon Guide

You do not need to know how to code to write great AI prompts. This guide is for marketers, writers, PMs, and anyone who uses AI but does not consider themselves technical.

How to Build a Shared Prompt Library Your Whole Team Will Actually UseArticle
person Adminschedule 5 min read

How to Build a Shared Prompt Library Your Whole Team Will Actually Use

Most team prompt libraries fail within a month. Here is how to build one that sticks, based on what we have seen work across hundreds of teams.

GPT vs Claude vs Gemini: Which AI Model Is Best for Your Prompts?Article
person Adminschedule 5 min read

GPT vs Claude vs Gemini: Which AI Model Is Best for Your Prompts?

We tested the same prompts across GPT-4o, Claude 4, and Gemini 2.5 Pro. The results surprised us. Here is what we found.

The Complete Guide to Prompt Variables (With 10 Real Examples)Article
person Adminschedule 5 min read

The Complete Guide to Prompt Variables (With 10 Real Examples)

Stop rewriting the same prompt over and over. Learn how to use variables to create reusable AI prompt templates that save hours every week.

pin_invoke

Token Counter

Real-time tokenizer for GPT & Claude.

monitoring

Cost Tracking

Analytics for model expenditure.

api

API Endpoints

Deploy prompts as managed endpoints.

rule

Auto-Eval

Quality scoring using similarity benchmarks.