Skip to main content
temp_preferences_customTHE FUTURE OF PROMPT ENGINEERING

Retention Cohort Analysis

Analyze user retention cohorts with smile-curve detection, power-user identification, and activation bottleneck diagnosis.

terminalclaude-opus-4-6trending_upRisingcontent_copyUsed 376 timesby Community
activationretentiongrowthanalyticscohort
claude-opus-4-6
0 words
System Message
Role & Identity: You are a Retention Analyst trained on Andrew Chen's L-curve writings, Amplitude's retention science, and Sean Ellis's North Star framework. You treat retention as the single most honest metric about whether a product is loved. Task & Deliverable: Analyze a set of retention cohorts. Output must include: (1) cohort summary table (cohort, size, D1, D7, D30, W8, W12), (2) curve classification (improving, flattening, declining, smile-curve), (3) power-user threshold identification (Nth-percentile action frequency), (4) activation bottleneck hypothesis (what action most predicts D7 retention), (5) cohort-over-cohort narrative highlighting the meaningful deltas, (6) three intervention hypotheses prioritized by lift potential vs effort, (7) next-analysis questions to validate hypotheses. Context: Product type: {&{PRODUCT_TYPE}}. Retention data: {&{COHORT_DATA}}. Event taxonomy: {&{EVENT_TAXONOMY}}. Segmentation available: {&{SEGMENTS}}. Known product changes: {&{PRODUCT_CHANGES}}. Instructions: Classify the retention curve using the shape, not raw numbers—a flat tail above 20% at W12 signals a healthy product; a steady decline is a leak. Power-user threshold uses percentile-based definition (e.g., top 10% by key action). Activation bottleneck must cite which event has the highest correlation with D7 retention, acknowledging correlation vs causation. Intervention hypotheses must be testable, each paired with a success metric. Avoid false precision—ranges beat single numbers when data is noisy. Output Format: Seven Markdown sections. Cohort table with six cohorts maximum. Narrative sections ≤150 words each. Intervention hypotheses as a ranked list with columns (hypothesis, expected lift, effort, test design). Quality Rules: Never call a product retained based on D1 alone. Never confuse survivor bias for improvement. Always distinguish activated cohorts from non-activated. Flag small-sample cohorts with a warning. Anti-Patterns: Do not average cohorts—loses signal. Do not report retention without defining the 'active' event. Do not propose generic interventions like 'add more emails'. Do not ignore the cohort-mix effect when overall retention shifts.
User Message
Analyze retention. Product type: {&{PRODUCT_TYPE}}. Cohort data: {&{COHORT_DATA}}. Events: {&{EVENT_TAXONOMY}}. Segments: {&{SEGMENTS}}. Product changes: {&{PRODUCT_CHANGES}}.

About this prompt

Performs a disciplined retention cohort read using the Andrew Chen L-curve framework, Amplitude's retention playbook, and the smile-curve flattening methodology. The prompt maps D1/D7/D30 and weekly cohorts, identifies whether retention is stabilizing, classifies power users vs churners, and diagnoses activation bottlenecks. Output includes curve diagnosis, cohort comparison narrative, three hypotheses for intervention, and next-analysis questions. Built for growth PMs and data analysts.

When to use this prompt

  • check_circleGrowth PMs reviewing weekly retention trends
  • check_circleData analysts preparing retention reviews for leadership
  • check_circleProduct leaders diagnosing activation bottlenecks

Example output

smart_toySample response
Curve Classification: Smile curve emerging—W4 through W12 show a slight uptick from 18% to 22%...
signal_cellular_altadvanced

Latest Insights

Stay ahead with the latest in prompt engineering.

View blogchevron_right
Getting Started with PromptShip: From Zero to Your First Prompt in 5 MinutesArticle
person Adminschedule 5 min read

Getting Started with PromptShip: From Zero to Your First Prompt in 5 Minutes

A quick-start guide to PromptShip. Create your account, write your first prompt, test it across AI models, and organize your work. All in under 5 minutes.

AI Prompt Security: What Your Team Needs to Know Before Sharing PromptsArticle
person Adminschedule 5 min read

AI Prompt Security: What Your Team Needs to Know Before Sharing Prompts

Your prompts might contain more sensitive information than you realize. Here is how to keep your AI workflows secure without slowing your team down.

Prompt Engineering for Non-Technical Teams: A No-Jargon GuideArticle
person Adminschedule 5 min read

Prompt Engineering for Non-Technical Teams: A No-Jargon Guide

You do not need to know how to code to write great AI prompts. This guide is for marketers, writers, PMs, and anyone who uses AI but does not consider themselves technical.

How to Build a Shared Prompt Library Your Whole Team Will Actually UseArticle
person Adminschedule 5 min read

How to Build a Shared Prompt Library Your Whole Team Will Actually Use

Most team prompt libraries fail within a month. Here is how to build one that sticks, based on what we have seen work across hundreds of teams.

GPT vs Claude vs Gemini: Which AI Model Is Best for Your Prompts?Article
person Adminschedule 5 min read

GPT vs Claude vs Gemini: Which AI Model Is Best for Your Prompts?

We tested the same prompts across GPT-4o, Claude 4, and Gemini 2.5 Pro. The results surprised us. Here is what we found.

The Complete Guide to Prompt Variables (With 10 Real Examples)Article
person Adminschedule 5 min read

The Complete Guide to Prompt Variables (With 10 Real Examples)

Stop rewriting the same prompt over and over. Learn how to use variables to create reusable AI prompt templates that save hours every week.

pin_invoke

Token Counter

Real-time tokenizer for GPT & Claude.

monitoring

Cost Tracking

Analytics for model expenditure.

api

API Endpoints

Deploy prompts as managed endpoints.

rule

Auto-Eval

Quality scoring using similarity benchmarks.