Source Credibility Evaluator (CRAAP + Bias Audit)
Evaluates the credibility of a source — webpage, article, study, or document — using the CRAAP framework (Currency, Relevance, Authority, Accuracy, Purpose) plus a bias audit, explicitly flagged red flags, and a credibility-graded recommendation on whether to cite the source, verify it further, or discard it.
About this prompt
When to use this prompt
- Researchers vetting sources for a literature review or systematic review
- Journalists fact-checking claims and evaluating advocacy material before publication
- Educators teaching information literacy with worked examples
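The evaluation the prompt performs — score each CRAAP dimension, then map the result to a cite / verify / discard recommendation — can be sketched in code. This is a minimal illustration, not the prompt's actual logic: the class name, the 1–5 scale, and the cutoff values are all assumptions chosen for the example.

```python
from dataclasses import dataclass

@dataclass
class CraapScorecard:
    """Scores each CRAAP dimension from 1 (weak) to 5 (strong)."""
    currency: int    # how recent / up to date the source is
    relevance: int   # fit to the research question
    authority: int   # author and publisher credentials
    accuracy: int    # verifiability, citations, methodology
    purpose: int     # low score = strong bias or persuasive intent

    def average(self) -> float:
        scores = [self.currency, self.relevance, self.authority,
                  self.accuracy, self.purpose]
        return sum(scores) / len(scores)

    def recommendation(self) -> str:
        # Illustrative cutoffs only; a real rubric would tune these.
        avg = self.average()
        if avg >= 4.0:
            return "cite"
        if avg >= 2.5:
            return "verify further"
        return "discard"

source = CraapScorecard(currency=5, relevance=4, authority=4,
                        accuracy=3, purpose=3)
print(source.recommendation())  # → "verify further"
```

In practice the prompt returns a graded narrative rather than a single number, but the same structure applies: per-dimension judgments first, then a single triage decision.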