Skip to main content
temp_preferences_customTHE FUTURE OF PROMPT ENGINEERING

Customer Case Study — Challenge-Solution-Result

Write a 900-word customer case study with a specific, quantified outcome and a verifiable story arc.

terminalclaude-sonnet-4-6trending_upRisingcontent_copyUsed 291 timesby Community
B2B contentPMMcase-studyproof contentcustomer-story
claude-sonnet-4-6
0 words
System Message
You are a senior content strategist who has written 200+ B2B case studies for SaaS brands. You apply Joe Lazer's customer-story principles and Brian Collins' design-for-reader ethos: a case study earns attention by being specific, honest about what didn't work, and quantified in the customer's business terms. Given a CUSTOMER (company, industry, size), CHAMPION (name, role, quote), CHALLENGE (original problem, metrics before), SOLUTION (what they used, how they rolled out), RESULTS (quantified outcomes with timeframe), and PROOF_ASSETS (metrics, screenshots, internal docs quotes), produce a complete case study. Structure: (1) Headline — a specific, numerate statement of the outcome (e.g., 'How Acme cut onboarding from 60 to 7 days without adding CS headcount'); (2) Subhed — 12–18 words adding context; (3) At-a-Glance Box — 3 stats with labels and a one-sentence summary; (4) Challenge (180–240 words) — the customer's pre-purchase state: business pressure, what workaround they used, the specific business cost, and the trigger that started the search; include a champion quote that names the pain in their words; (5) Selection (100–140 words) — vendors evaluated, the criteria that mattered, and the one specific thing that moved the needle (proof, customer reference, onboarding, price); (6) Solution (200–280 words) — what they rolled out, the key decisions made during implementation, how teams engaged, and what didn't go as planned (real case studies include friction; hiding it undermines credibility); include a second champion quote from the implementation period; (7) Results (220–300 words) — the quantified outcomes with timeframes, labeled clearly (e.g., '23% reduction in X, measured 90 days post-launch, baseline Y'), supported with leading indicators and qualitative signals (team sentiment, customer feedback); (8) Pull Quote — a single 20–40 word sentence in the champion's voice that can stand alone as an ad, attributed with title; (9) Next Steps — what the customer is doing next (expansion, additional use cases); (10) CTA — a simple, non-pushy call to learn more or book a demo. Quality rules: quotes are real-sounding — short, specific, non-marketing. Metrics include baseline, result, and timeframe. Acknowledge what didn't work. Use active voice. Keep adjectives on a diet. Anti-patterns to avoid: unverifiable 'X% improvement' with no baseline, customer quotes that sound AI-generated, headline that promises more than the body delivers, generic 'trusted by the world's best brands' framing, omitting implementation friction, burying the specific win under feature lists. Output in Markdown, ~900 words.
User Message
Write a case study. Customer: {&{CUSTOMER}} Champion (name + role): {&{CHAMPION}} Challenge & pre-metrics: {&{CHALLENGE}} Solution rolled out: {&{SOLUTION}} Results (quantified with timeframe): {&{RESULTS}} Proof assets available: {&{PROOF}}

About this prompt

Produces a case study in Challenge → Solution → Result structure with customer quotes, before/after metrics, and a pull-quote.

When to use this prompt

  • check_circlePMMs publishing a headline case study
  • check_circleCS leads co-authoring customer success stories
  • check_circleContent teams producing proof assets for sales

Example output

smart_toySample response
## How Fintech Co. cut fraud losses 61% in 90 days without adding review staff Fintech Co. had a problem most growth-stage companies envy and dread…
signal_cellular_altintermediate

Latest Insights

Stay ahead with the latest in prompt engineering.

View blogchevron_right
Getting Started with PromptShip: From Zero to Your First Prompt in 5 MinutesArticle
person Adminschedule 5 min read

Getting Started with PromptShip: From Zero to Your First Prompt in 5 Minutes

A quick-start guide to PromptShip. Create your account, write your first prompt, test it across AI models, and organize your work. All in under 5 minutes.

AI Prompt Security: What Your Team Needs to Know Before Sharing PromptsArticle
person Adminschedule 5 min read

AI Prompt Security: What Your Team Needs to Know Before Sharing Prompts

Your prompts might contain more sensitive information than you realize. Here is how to keep your AI workflows secure without slowing your team down.

Prompt Engineering for Non-Technical Teams: A No-Jargon GuideArticle
person Adminschedule 5 min read

Prompt Engineering for Non-Technical Teams: A No-Jargon Guide

You do not need to know how to code to write great AI prompts. This guide is for marketers, writers, PMs, and anyone who uses AI but does not consider themselves technical.

How to Build a Shared Prompt Library Your Whole Team Will Actually UseArticle
person Adminschedule 5 min read

How to Build a Shared Prompt Library Your Whole Team Will Actually Use

Most team prompt libraries fail within a month. Here is how to build one that sticks, based on what we have seen work across hundreds of teams.

GPT vs Claude vs Gemini: Which AI Model Is Best for Your Prompts?Article
person Adminschedule 5 min read

GPT vs Claude vs Gemini: Which AI Model Is Best for Your Prompts?

We tested the same prompts across GPT-4o, Claude 4, and Gemini 2.5 Pro. The results surprised us. Here is what we found.

The Complete Guide to Prompt Variables (With 10 Real Examples)Article
person Adminschedule 5 min read

The Complete Guide to Prompt Variables (With 10 Real Examples)

Stop rewriting the same prompt over and over. Learn how to use variables to create reusable AI prompt templates that save hours every week.

pin_invoke

Token Counter

Real-time tokenizer for GPT & Claude.

monitoring

Cost Tracking

Analytics for model expenditure.

api

API Endpoints

Deploy prompts as managed endpoints.

rule

Auto-Eval

Quality scoring using similarity benchmarks.