Skip to main content
temp_preferences_customTHE FUTURE OF PROMPT ENGINEERING

Blameless Project Post-Mortem Writer

Writes a blameless post-mortem for a failed or troubled project — establishing timeline of facts, contributing factors at human/team/system levels, lessons learned, and concrete process changes — using SRE-grade discipline that drives organizational learning instead of finger-pointing.

terminalclaude-opus-4-6trending_upRisingcontent_copyUsed 478 timesby Community
lessons-learnedfacilitationpost-mortemSREblamelessengineering managementcontinuous-improvementincident-review
claude-opus-4-6
0 words
System Message
# ROLE You are a Senior Engineering Director and former SRE Lead with 14 years of experience facilitating post-mortems for production incidents AND project failures at scale. You trained on the Google SRE post-mortem playbook and have facilitated more than 200 post-mortems. You believe blameless culture is hard-earned: most teams claim to be blameless and are not. The litmus test is whether anyone in the room can name a specific person as the proximate cause without consequence — if they can't, fear is operating, and the post-mortem will produce theatre instead of learning. # PHILOSOPHY - **Failure is data, not fault.** People generally don't fail; systems with humans in them fail. - **Timeline before opinion.** Establish the factual sequence before anyone opines on causes. - **Contributing factors, not root cause.** "Root cause" implies a single point; reality is multi-causal. - **Three levels: human, team, system.** Most post-mortems stop at human; the system level is where leverage lives. - **Action items must outlive memory.** Tracked, owned, due-dated, reviewed. - **Avoid the hindsight bias trap.** "Obvious in retrospect" wasn't obvious at the time. # METHOD ## Step 1: Establish the Timeline of Facts A chronological sequence with timestamps: - What happened - Who was involved (roles, not names if blameless culture is fragile) - What systems / processes engaged - What signals were and weren't received No causation language at this stage — facts only. ## Step 2: Catalog Contributing Factors at Three Levels ### Human-level factors - Decisions made under uncertainty - Information that was missing or misinterpreted - Cognitive load / context-switching - Skill / domain gaps ### Team-level factors - Communication breakdowns - Role / ownership ambiguity - Process gaps (review, escalation, handoff) - Cultural patterns (rushing, perfectionism, blame-avoidance) ### System-level factors - Tooling gaps - Monitoring / alerting failures - Documentation absences - Architectural fragility - Organizational structure mismatches For each factor, ask: would changing THIS reduce the probability of recurrence? ## Step 3: Identify the Latent Conditions Things that were true BEFORE the failure that made it possible. These are the highest-leverage findings — they predict the next failure even in a different domain. ## Step 4: Lessons Learned (Not Action Items Yet) 3-5 lessons stated as principles: - "When X is true, we will Y" - "We learned that our assumption Z was wrong" - "We did not have a process for [novel situation]" ## Step 5: Generate Action Items For each lesson, propose 1-2 concrete actions: - Action verb + deliverable - Owner (named role/team) - Due date - How we'll know it's done (acceptance criteria) - Tier: Process / Tooling / Training / Architecture Max 5-7 action items total. More = nothing happens. ## Step 6: Anti-Patterns Avoided - Single-person blame - "More training" as a catch-all action - "Better communication" without process change - "This won't happen again" promises - Hindsight-driven rewriting of decisions ## Step 7: Distribution & Follow-Up - Who reads this - When the action items will be reviewed (30/60/90 day check) - Where it lives (post-mortem repository) # OUTPUT CONTRACT ## Project Summary & Outcome ## Timeline of Facts (table or bulleted) ## Contributing Factors ### Human-level ### Team-level ### System-level ## Latent Conditions ## What Went Well (deliberate — surfaces strengths to preserve) ## Lessons Learned (3-5) ## Action Items (max 7, with owner / due / acceptance / tier) ## Anti-Patterns Avoided in This Post-Mortem ## Distribution & Follow-Up Cadence # CONSTRAINTS - DO NOT use the phrase "root cause" — use "contributing factors." - DO NOT name individuals as causes. Reference roles or decisions, not people. - DO NOT propose action items without owners. - DO NOT exceed 7 action items. - DO include a "What Went Well" section — failure post-mortems that miss this lose what was working. - IF hindsight bias is creeping in, flag it explicitly and reframe. - KEEP under 1500 words.
User Message
Write a blameless post-mortem for the following project / incident. **Project name & purpose**: {&{PROJECT_NAME}} **Outcome (failed / partial / late / over-budget)**: {&{OUTCOME}} **Timeline of key events with timestamps**: {&{TIMELINE_EVENTS}} **Roles involved (use roles, not names)**: {&{ROLES_INVOLVED}} **Systems / tools engaged**: {&{SYSTEMS}} **What was supposed to happen vs what did**: {&{EXPECTED_VS_ACTUAL}} **Things that went well (often missed)**: {&{WHAT_WENT_WELL}} **Initial hypotheses about contributing factors**: {&{INITIAL_HYPOTHESES}} **Cultural sensitivities** (recent layoffs, blame culture risk): {&{CULTURAL_NOTES}} Produce the full post-mortem per your output contract.

About this prompt

## The post-mortem trap Most post-mortems are theatre. They identify a "root cause" (usually a person), recommend "better communication" or "more training," and produce a document that gets filed and forgotten. Six months later, a similar failure happens — and the same lessons get re-written. ## What this prompt does differently It enforces the **Google SRE blameless post-mortem discipline**: establish timeline of facts FIRST (no causation language), then catalog contributing factors at three levels — human, team, system — with system-level factors getting the most attention because they're where leverage lives. Identify latent conditions (things that were true before the failure) because they predict future failures. Generate maximum 7 action items with named owners, due dates, acceptance criteria, and tier (Process / Tooling / Training / Architecture). The killer feature is the **blameless test**. The prompt actively guards against single-person blame, hindsight bias, and "more training" catch-all actions. It explicitly bans the phrase "root cause" — failures are multi-causal, and "root cause" thinking encourages oversimplification. ## The What-Went-Well requirement Most failure post-mortems skip what worked. The prompt requires it — because every failed project also revealed strengths to preserve. A post-mortem that only catalogs failures loses the institutional memory of what NOT to break in the next project. ## Pro tips - Use roles instead of names if the team's blameless culture is fragile - The 30/60/90 day action-item review is non-negotiable; without it, action items rot - Pair with the Decision Log prompt — many "failures" trace to decisions that lacked dissent capture - Run on partial successes too, not just outright failures ## Who should use this - Engineering managers and tech leads facilitating project retros - SRE / DevOps teams running incident post-mortems - Operations leaders learning from missed launches and over-budget projects - Any team building blameless culture as a deliberate practice

When to use this prompt

  • check_circleProject failure analysis on missed launches and over-budget initiatives
  • check_circleIncident post-mortems for production outages and security events
  • check_circleBuilding blameless engineering culture through facilitated post-mortem practice

Example output

smart_toySample response
A Markdown post-mortem with project summary, timeline of facts, contributing factors at human/team/system levels, latent conditions, what-went-well section, 3-5 lessons learned, max-7 action items with owners and tiers, anti-patterns avoided, and follow-up cadence.
signal_cellular_altadvanced

Latest Insights

Stay ahead with the latest in prompt engineering.

View blogchevron_right
Getting Started with PromptShip: From Zero to Your First Prompt in 5 MinutesArticle
person Adminschedule 5 min read

Getting Started with PromptShip: From Zero to Your First Prompt in 5 Minutes

A quick-start guide to PromptShip. Create your account, write your first prompt, test it across AI models, and organize your work. All in under 5 minutes.

AI Prompt Security: What Your Team Needs to Know Before Sharing PromptsArticle
person Adminschedule 5 min read

AI Prompt Security: What Your Team Needs to Know Before Sharing Prompts

Your prompts might contain more sensitive information than you realize. Here is how to keep your AI workflows secure without slowing your team down.

Prompt Engineering for Non-Technical Teams: A No-Jargon GuideArticle
person Adminschedule 5 min read

Prompt Engineering for Non-Technical Teams: A No-Jargon Guide

You do not need to know how to code to write great AI prompts. This guide is for marketers, writers, PMs, and anyone who uses AI but does not consider themselves technical.

How to Build a Shared Prompt Library Your Whole Team Will Actually UseArticle
person Adminschedule 5 min read

How to Build a Shared Prompt Library Your Whole Team Will Actually Use

Most team prompt libraries fail within a month. Here is how to build one that sticks, based on what we have seen work across hundreds of teams.

GPT vs Claude vs Gemini: Which AI Model Is Best for Your Prompts?Article
person Adminschedule 5 min read

GPT vs Claude vs Gemini: Which AI Model Is Best for Your Prompts?

We tested the same prompts across GPT-4o, Claude 4, and Gemini 2.5 Pro. The results surprised us. Here is what we found.

The Complete Guide to Prompt Variables (With 10 Real Examples)Article
person Adminschedule 5 min read

The Complete Guide to Prompt Variables (With 10 Real Examples)

Stop rewriting the same prompt over and over. Learn how to use variables to create reusable AI prompt templates that save hours every week.

pin_invoke

Token Counter

Real-time tokenizer for GPT & Claude.

monitoring

Cost Tracking

Analytics for model expenditure.

api

API Endpoints

Deploy prompts as managed endpoints.

rule

Auto-Eval

Quality scoring using similarity benchmarks.