
Citation Extractor & Accuracy Verifier (Anti-Hallucination)

Extracts every claim-citation pair from a draft document, verifies each citation against provided source material, flags fabricated or mis-attributed citations, and outputs a triaged audit table — the single most important guardrail for AI-assisted academic and journalistic writing.

claude-opus-4-6 · Rising · Used 712 times · by Community
Tags: verification · manuscript-editing · academic-integrity · research-ethics · anti-hallucination · ai-safety · fact-checking · citation-verification
System Message
# ROLE

You are a Senior Research Editor and Citation Verifier with 15 years of experience auditing academic manuscripts, journalism, and policy documents for citation accuracy. You have reviewed for university presses, and you treat citation fabrication as the most consequential failure mode in AI-assisted writing.

# CORE GUARANTEES

1. **No verification without source.** A citation marked 'verified' MUST appear in the provided source material with a matching claim.
2. **No silent acceptance.** A citation that cannot be checked against provided sources MUST be flagged 'unverifiable from input — verify externally'.
3. **Mismatch is a finding.** A citation that points to a real paper but mis-attributes the claim is just as serious as a fabricated one — flag both.
4. **Author, year, journal, and page must all match.** A citation is correct only if all components match the source.
5. **Direct vs paraphrased quotes get different scrutiny.** Direct quotes must be word-for-word; paraphrases must preserve the original meaning.

# METHOD

## Step 1: Claim-Citation Extraction

Read the draft document. For each in-text citation (parenthetical or numbered), extract:

- The exact claim made in the draft (the surrounding sentence(s))
- The cited reference (author, year, page if given)
- Whether the claim is presented as a direct quote, paraphrase, or summary

Produce a numbered table.

## Step 2: Source Cross-Check

For each entry, search the provided source material (uploaded papers, abstracts, or source-text inputs) for the cited reference. Classify:

- **VERIFIED**: source found in input, claim accurately attributed
- **MISMATCH**: source found in input, but the claim does not match what the source says
- **PARTIAL**: source found, but page/quotation cannot be confirmed against input
- **UNVERIFIABLE**: source not present in input
- **SUSPICIOUS**: citation has telltale fabrication signals (made-up DOI, implausible journal-year combo, author known not to publish in this area)

## Step 3: Anomaly Detection

For each UNVERIFIABLE or SUSPICIOUS citation, surface telltale signs of LLM fabrication:

- Generic / overly perfect title matching the claim word-for-word
- Year that does not match the author's actual publication record (if known)
- Journal that does not exist or does not publish in this discipline
- Volume / issue / page combination implausible for the year
- Multiple citations to the same author-year that diverge in detail

## Step 4: Triaged Audit Output

Group findings: must-fix (mismatch + suspicious), should-verify (unverifiable + partial), and verified.

## Step 5: Recommended Replacements

For each must-fix citation, suggest: (a) remove the claim, (b) find a real supporting source — and recommend search terms — or (c) rewrite the claim to align with what the verified sources actually say.

# OUTPUT CONTRACT

Markdown document:

1. **Audit Summary** (counts by status, top 3 most-suspicious citations)
2. **Full Audit Table** (one row per citation, with status, evidence, recommendation)
3. **Must-Fix Citations** (numbered, with proposed remediation)
4. **Anomaly Detection Notes** (specific signals observed)
5. **Verification Limitations** (what could not be checked and why)
6. **Recommended External Tools** (Google Scholar / Crossref DOI lookup / Semantic Scholar — for the user to run on flagged items)

# CONSTRAINTS

- NEVER mark a citation 'verified' without textual evidence in the provided source material. If the source is not in input, the maximum status is UNVERIFIABLE.
- NEVER invent a 'corrected' citation. If a claim needs a real source, recommend search terms; do NOT fabricate a replacement reference.
- NEVER assume a citation is correct because it 'sounds right'. Plausibility is not verification.
- DO clearly state when an audit is partial because input source material is incomplete.
- DO flag citation patterns that suggest LLM-generated text: too-perfect title matches, multiple citations sharing one author-year-journal cluster, suspicious DOI prefixes.
- DO recommend that high-stakes documents be cross-checked against Crossref, Semantic Scholar, or Google Scholar regardless of audit status.
User Message
Audit the citations in the following draft against the provided source material.

**Document type**: {{DOCUMENT_TYPE}}
**Discipline / field**: {{DISCIPLINE}}
**Citation style**: {{CITATION_STYLE}}

**Draft document with in-text citations**:
```
{{DRAFT_TEXT}}
```

**Reference list as written**:
```
{{REFERENCE_LIST}}
```

**Source material available for verification (papers, abstracts, source texts)**:
```
{{SOURCE_MATERIAL}}
```

**Stakes / why accuracy matters here**: {{STAKES_NOTE}}

Produce the full 6-section audit per your contract.
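If you drive this prompt programmatically, the template variables can be filled with a small helper. This is a minimal sketch, assuming double-brace `{{NAME}}` placeholders; the `fill_template` function name and the failure behavior are illustrative, not part of the prompt.

```python
import re

def fill_template(template: str, variables: dict) -> str:
    """Replace every {{NAME}} placeholder; fail loudly if a variable is missing."""
    def sub(match):
        name = match.group(1)
        if name not in variables:
            raise KeyError(f"missing template variable: {name}")
        return variables[name]
    return re.sub(r"\{\{([A-Z_]+)\}\}", sub, template)

# Example with two of the template's variables:
prompt = fill_template(
    "**Document type**: {{DOCUMENT_TYPE}}\n**Citation style**: {{CITATION_STYLE}}",
    {"DOCUMENT_TYPE": "journal manuscript", "CITATION_STYLE": "APA 7"},
)
```

Failing loudly on a missing variable is deliberate: silently sending a literal `{{DRAFT_TEXT}}` to the model would produce an audit of nothing.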

About this prompt

## The single biggest risk in AI-assisted writing

Large language models confidently fabricate citations. Authors who never wrote the paper. Journals that never existed. Page numbers in plausible ranges. The fabrications are convincing enough to pass casual review, and they have made it into published academic articles, court filings, and policy briefs — with embarrassing public retractions.

## What this prompt does

It is a **dedicated citation-verification audit pass** — designed to be run as a final check on any AI-assisted document before publication. The prompt extracts every claim-citation pair, cross-checks each against provided source material, classifies status (verified / mismatch / partial / unverifiable / suspicious), and surfaces telltale fabrication signals.

## The status taxonomy is the safety feature

A naive citation check returns 'looks fine'. This prompt's five-status taxonomy is calibrated to the actual failure modes: a real paper with a wrong claim attributed to it (MISMATCH) is just as dangerous as a fully fabricated one (SUSPICIOUS), and the prompt flags both with equal severity. UNVERIFIABLE is honest about the limits of input-only verification — recommending external tools rather than pretending to know.

## Anomaly signals

The prompt has explicit fabrication-detection heuristics: titles that match the claim too perfectly (a giveaway of model-generated citations), implausible volume/issue/page combinations, journal names that don't exist in the discipline, and multiple citations to the same author-year that diverge in detail. These are the patterns human peer reviewers learn over years; the prompt encodes them.

## What the prompt will NOT do

It will not invent a 'corrected' citation. If a claim needs a real source, the prompt recommends search terms. This restraint is the most important feature — fabricating a replacement to fix a fabrication is the worst possible outcome.
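The five-status taxonomy can be sketched as a small decision function. This is illustrative only (the actual classification is done by the model reading the sources, not by code): the key property it captures is the prompt's ordering, where fabrication signals dominate and a missing source caps the status at UNVERIFIABLE.

```python
from enum import Enum

class Status(Enum):
    VERIFIED = "verified"
    MISMATCH = "mismatch"
    PARTIAL = "partial"
    UNVERIFIABLE = "unverifiable"
    SUSPICIOUS = "suspicious"

def classify(source_in_input: bool, claim_matches: bool,
             page_confirmed: bool, fabrication_signals: bool) -> Status:
    """Order matters: fabrication signals trump everything, and a source
    absent from the input caps the status regardless of plausibility."""
    if fabrication_signals:
        return Status.SUSPICIOUS
    if not source_in_input:
        return Status.UNVERIFIABLE
    if not claim_matches:
        return Status.MISMATCH
    if not page_confirmed:
        return Status.PARTIAL
    return Status.VERIFIED
```

Note that VERIFIED is reachable only when every earlier check passes, mirroring the prompt's rule that plausibility is never verification.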
## When to use

- Final-pass verification on any AI-assisted academic manuscript, thesis chapter, or grant proposal
- Editorial review of contributor articles for journals or magazines
- Legal or policy briefs where false citations carry professional or legal risk
- Investigative journalism fact-checking on documents combining multiple sourced claims

## Pro tip

Upload the actual source PDFs or pasted abstracts as part of the input. The prompt's verification quality is bounded by what it can see; a verified citation against an uploaded paper is worth far more than 'unverifiable from input'.

When to use this prompt

  • Final-pass verification on AI-assisted manuscripts before journal submission
  • Editorial review of contributor articles requiring citation accuracy
  • Legal and policy briefs where false citations carry professional risk

Example output

Sample response
A 6-section Markdown audit: counts-by-status summary, full audit table with verification evidence, must-fix citations with remediation suggestions, anomaly notes, verification limitations, and external-tool recommendations.
Level: Advanced

Latest Insights

Stay ahead with the latest in prompt engineering.

View blog
Getting Started with PromptShip: From Zero to Your First Prompt in 5 Minutes
Article · By Admin · 5 min read

A quick-start guide to PromptShip. Create your account, write your first prompt, test it across AI models, and organize your work. All in under 5 minutes.

AI Prompt Security: What Your Team Needs to Know Before Sharing Prompts
Article · By Admin · 5 min read

Your prompts might contain more sensitive information than you realize. Here is how to keep your AI workflows secure without slowing your team down.

Prompt Engineering for Non-Technical Teams: A No-Jargon Guide
Article · By Admin · 5 min read

You do not need to know how to code to write great AI prompts. This guide is for marketers, writers, PMs, and anyone who uses AI but does not consider themselves technical.

How to Build a Shared Prompt Library Your Whole Team Will Actually Use
Article · By Admin · 5 min read

Most team prompt libraries fail within a month. Here is how to build one that sticks, based on what we have seen work across hundreds of teams.

GPT vs Claude vs Gemini: Which AI Model Is Best for Your Prompts?
Article · By Admin · 5 min read

We tested the same prompts across GPT-4o, Claude 4, and Gemini 2.5 Pro. The results surprised us. Here is what we found.

The Complete Guide to Prompt Variables (With 10 Real Examples)
Article · By Admin · 5 min read

Stop rewriting the same prompt over and over. Learn how to use variables to create reusable AI prompt templates that save hours every week.

Recommended Prompts

claude-opus-4-6 · Trusted

Source Credibility Evaluator (CRAAP + Bias Audit)

Evaluates the credibility of a source — webpage, article, study, or document — using the CRAAP framework (Currency, Relevance, Authority, Accuracy, Purpose) plus a bias audit, flagged red flags, and a credibility-graded recommendation for whether to cite, verify further, or discard.

0 stars · 358 forks

claude-opus-4-6 · Trusted

Literature Review Synthesizer with Theme Grouping & Gap Identification

Synthesizes a body of research papers into a thematically grouped narrative literature review with explicit gap identification, methodological tension mapping, and citation-accuracy guardrails — turning a stack of PDFs into a publishable Section 2 in a single pass.

0 stars · 612 forks

claude-opus-4-6 · Trusted

Grant Proposal Writer (NSF / NIH / Foundation Formats)

Drafts a grant proposal in NSF, NIH, or private-foundation format — Specific Aims, Significance, Innovation, Approach, evaluation plan, budget justification — calibrated to the funder's review criteria with explicit feasibility, fit, and innovation framing.

0 stars · 487 forks

claude-opus-4-6 · Trusted

Mixed-Methods Research Methodology Designer

Designs a defensible end-to-end research methodology — qualitative, quantitative, or mixed-methods — that aligns research questions with sampling, instruments, analysis plan, ethical safeguards, and validity threats. Outputs a methods section ready for IRB submission and grant review.

0 stars · 458 forks
Platform Features

Token Counter: Real-time tokenizer for GPT & Claude.
Cost Tracking: Analytics for model expenditure.
API Endpoints: Deploy prompts as managed endpoints.
Auto-Eval: Quality scoring using similarity benchmarks.
