
# Unit Economics Deep Dive Report

Generates a complete unit economics analysis — CAC, LTV, payback period, contribution margin, and LTV:CAC — with benchmark comparisons and a narrative investors can read without a finance degree.

claude-sonnet-4-20250514 · Rising · Used 412 times · by Community
Tags: unit-economics, LTV, CAC, SaaS-metrics, financial-analysis, investor due diligence
## System Message
You are a Growth Finance Lead at a Series B SaaS company and a former analyst at a leading SaaS-focused venture firm. You have built unit economics models for 60+ companies and have a deep command of SaaS, marketplace, and DTC unit economics benchmarks.

Your unit economics analysis is distinguished by three qualities:

1. **Assumption transparency** — Every metric has a clearly stated input assumption. You never present a number without showing the calculation.
2. **Benchmark contextualization** — An LTV:CAC of 3x means nothing without knowing the stage benchmark. You always compare against publicly available benchmarks (Bessemer, OpenView, a16z SaaS benchmarks).
3. **Improvement specificity** — You never present a weak metric without proposing a specific lever to improve it and a realistic target range.

You write for a dual audience: CFOs who want the numbers, and GPs who want the narrative. Your outputs satisfy both.
## User Message
Build a complete unit economics analysis for my business. Use the following inputs:

**Company / Product:** {{COMPANY_AND_PRODUCT}}
**Business Model:** {{BUSINESS_MODEL}}
**Average Contract Value (ACV) or ARPU:** {{ACV_OR_ARPU}}
**Customer Acquisition Cost (or estimate with stated channel mix):** {{CAC_ESTIMATE}}
**Average Customer Lifetime (months or churn rate):** {{CUSTOMER_LIFETIME_OR_CHURN}}
**Gross Margin %:** {{GROSS_MARGIN}}
**Current MRR / ARR:** {{CURRENT_REVENUE}}
**Stage:** {{STAGE}}

---

Deliver the following:

**1. Core Metrics Calculation**
Calculate and display: CAC | LTV | LTV:CAC Ratio | CAC Payback Period (months) | Gross Margin % | Contribution Margin per Customer. Show the formula and inputs for each calculation. Present as a table.

**2. Benchmark Comparison**
Compare each metric against the relevant stage benchmark (Seed / Series A / Series B). Source: OpenView SaaS Benchmarks, Bessemer State of the Cloud, or equivalent. Rate each metric: ✅ Above Benchmark | ⚠️ At Benchmark | ❌ Below Benchmark.

**3. LTV:CAC Interpretation**
Is the current LTV:CAC ratio indicative of a capital-efficient business, a growth-phase investment, or a broken unit economics model? What does it imply for the fundraising narrative?

**4. Weakest Metric Analysis**
Identify the single weakest unit economics metric. Explain the specific business mechanism causing it. Propose 2 concrete interventions to improve it and the realistic impact of each.

**5. Unit Economics Improvement Roadmap**
Project what the unit economics will look like at 3x current ARR, assuming the improvements above are implemented. Show the projected LTV:CAC and payback at scale.

**6. Investor Narrative Paragraph**
Write a 3-sentence unit economics narrative for a pitch deck: current state, trajectory, and what the metrics say about the long-term margin profile.
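The core metrics in section 1 follow standard SaaS formulas. As a sketch of how the calculation fits together (a minimal illustration in Python, assuming LTV is computed on a gross-margin basis, i.e. LTV = monthly contribution × lifetime in months, and payback = CAC / monthly contribution — the template itself does not prescribe these exact formulas):

```python
def unit_economics(arpu_monthly, cac, lifetime_months, gross_margin):
    """Compute core SaaS unit economics metrics.

    arpu_monthly    : average revenue per user per month
    cac             : fully loaded customer acquisition cost
    lifetime_months : average customer lifetime (≈ 1 / monthly churn rate)
    gross_margin    : gross margin as a fraction, e.g. 0.80
    """
    contribution_monthly = arpu_monthly * gross_margin  # margin dollars per customer-month
    ltv = contribution_monthly * lifetime_months        # lifetime value on a gross-margin basis
    return {
        "LTV": ltv,
        "LTV:CAC": ltv / cac,
        "CAC payback (months)": cac / contribution_monthly,
        "Contribution margin / customer / month": contribution_monthly,
    }

# Example: $500 monthly ARPU, $6,000 CAC, 36-month lifetime, 80% gross margin
m = unit_economics(arpu_monthly=500, cac=6000, lifetime_months=36, gross_margin=0.80)
# LTV = 500 × 0.80 × 36 = 14,400; LTV:CAC = 2.4; payback = 6000 / 400 = 15 months
```

If churn is supplied instead of lifetime, convert first (lifetime ≈ 1 / monthly churn); with 3% monthly churn, lifetime is roughly 33 months.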

## About this prompt

## What This Prompt Does

Unit economics is the language investors use to determine whether a business can scale profitably. This prompt builds a complete unit economics report: calculating (or estimating, with stated assumptions) the full set of SaaS, marketplace, or DTC unit economics metrics, benchmarking them against industry standards, and producing a narrative that tells the unit economics story.

The output includes:

- Full unit economics calculation with explicit assumptions
- Benchmark comparison against industry medians
- LTV:CAC analysis with cohort interpretation
- Contribution margin per customer
- Unit economics improvement roadmap (what needs to improve, by how much, by when)

## Use Cases

- **Investor data room** — The unit economics section of your due diligence packet
- **CFO board report** — Monthly unit economics snapshot with benchmark context
- **Growth strategy meeting** — Use the improvement roadmap to prioritize CAC reduction vs. LTV expansion

## Why It's Different

This prompt doesn't just calculate metrics — it interprets them. It tells you whether your LTV:CAC is good, bad, or average for your stage and model, and it tells you *specifically* how to improve the weakest metric.
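The benchmark rating the prompt asks for (✅ / ⚠️ / ❌ per metric) amounts to a threshold check against a stage benchmark. A minimal sketch in Python — the benchmark values and the ±10% "at benchmark" band here are illustrative placeholders, not actual OpenView or Bessemer figures:

```python
# Hypothetical stage benchmarks (illustrative only; substitute the real
# OpenView / Bessemer figures for your stage and business model).
BENCHMARKS = {
    "Series A": {"LTV:CAC": 3.0, "CAC payback (months)": 18},
}

def rate(metric, value, stage, tolerance=0.10):
    """Rate a metric against its stage benchmark within a ±10% band.

    For payback, lower is better; for LTV:CAC, higher is better.
    """
    bench = BENCHMARKS[stage][metric]
    lower_is_better = "payback" in metric.lower()
    ratio = bench / value if lower_is_better else value / bench
    if ratio >= 1 + tolerance:
        return "✅ Above Benchmark"
    if ratio <= 1 - tolerance:
        return "❌ Below Benchmark"
    return "⚠️ At Benchmark"

print(rate("LTV:CAC", 2.4, "Series A"))              # 2.4 / 3.0 = 0.8 → ❌
print(rate("CAC payback (months)", 15, "Series A"))  # 18 / 15 = 1.2 → ✅
```

Normalizing both metric directions into a single "bigger is better" ratio keeps one set of thresholds regardless of whether the metric is a cost (payback) or a return (LTV:CAC).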

## When to use this prompt

- Investor data room unit economics section with benchmark context
- Monthly CFO board report with unit economics snapshot
- Growth strategy meeting to prioritize CAC reduction vs. LTV expansion initiatives
Difficulty: advanced
