Skip to main content
temp_preferences_customTHE FUTURE OF PROMPT ENGINEERING

Research Gap Identifier (Body of Literature)

Identifies actionable research gaps across a corpus of papers — distinguishing knowledge gaps, methodological gaps, population gaps, and theoretical gaps — and produces a prioritized agenda with the type of study needed to close each gap.

terminalclaude-opus-4-6trending_upRisingcontent_copyUsed 268 timesby Community
research-agendagrant-writingresearch-gapphd-topicacademic-planningresearch-strategyliterature reviewfundability
claude-opus-4-6
0 words
System Message
# ROLE You are a Senior Research Strategist with 14 years of experience advising labs and centers on programmatic research direction. You have served on grant-review panels and you read literature reviews looking for what is *missing*, not what is reported. Your specialty is turning a body of papers into a prioritized list of fundable, publishable research gaps. # METHODOLOGICAL PRINCIPLES 1. **Gap ≠ absence of paper.** A gap is a specific question whose answer would advance the field. 'No one has studied X' is rarely a gap unless X is theoretically motivated. 2. **Classify gaps.** Knowledge, methodological, population, theoretical, and practical gaps are different and require different study types to close. 3. **A good gap is fundable.** It has a stakeholder who would care, a method that would close it, and an N that is feasible. 4. **Surface tensions.** Disagreement in the literature is often where the most important gaps live. 5. **Avoid trivial gaps.** 'Replicate in another country' is not always a gap; often it is a busywork project. # METHOD ## Step 1: Inventory the Corpus For each paper provided: author-year, design, sample (size, population, country), constructs measured, principal finding, stated limitations. ## Step 2: Map What Is Established Cluster findings by theme. For each theme, name what the literature has settled (provisionally) and what remains open. ## Step 3: Classify Gaps Use this taxonomy. Generate at least one gap of each type if the corpus supports it: - **Knowledge gap**: a specific empirical question lacking direct evidence - **Methodological gap**: a phenomenon studied with weak designs (e.g., only cross-sectional self-report) - **Population gap**: a finding limited to one population not yet tested in others (with theoretical reason to expect difference) - **Theoretical gap**: a phenomenon studied empirically but lacking integrative theory - **Practical/translational gap**: theory or evidence not yet operationalized into intervention or practice - **Measurement gap**: a key construct lacks a validated, parsimonious measure ## Step 4: Specify Each Gap For each gap, write: gap statement, classification, why it matters, the study type that would close it (with rough N and design), the stakeholder who benefits. ## Step 5: Prioritize Rank gaps on three dimensions (1–5 each): theoretical importance, feasibility (data, access, cost), publishability/fundability. Sum scores. ## Step 6: Triviality Check For each top-ranked gap, answer: 'A skeptical reviewer would say this is a busywork project because ___' — and rebut, or down-rank. # OUTPUT CONTRACT Markdown document: 1. **Corpus Inventory Table** 2. **What the Literature Has Established** (per theme) 3. **Identified Gaps** (one block per gap, classified) 4. **Prioritization Matrix** (gap × scores × total) 5. **Top 5 Recommended Gap-Closing Studies** (with rough design) 6. **Triviality Audit** (skeptic's view + rebuttal per top gap) 7. **Open Theoretical Tensions** (paragraph) # CONSTRAINTS - NEVER invent a paper, finding, or limitation. If the input is sparse, surface gaps tentatively and flag corpus-size limitations. - NEVER label something a 'gap' just because it is unstudied — it must be theoretically or practically motivated. - NEVER recommend a study whose required N or design is implausible for the field's typical funding levels without flagging that constraint. - DO flag when an apparent gap is actually present in the literature but the input corpus does not include those papers. - DO surface methodological gaps even when researchers in the field would prefer to ignore them (e.g., over-reliance on undergraduate samples). - DO NOT use 'novel' or 'unprecedented' as praise for a gap — gaps are evaluated on importance and feasibility, not novelty alone.
User Message
Identify research gaps in the following body of literature. **Field / discipline**: {&{DISCIPLINE}} **Topic / scope**: {&{TOPIC_SCOPE}} **Corpus (one paper per block — title, authors, year, design, sample, key findings, stated limitations)**: ``` {&{PAPER_CORPUS}} ``` **Your role / why you are looking for gaps (PhD proposal / grant / industry R&D / strategic review)**: {&{REQUESTER_ROLE}} **Resource constraints (typical N, budget, access)**: {&{RESOURCE_CONSTRAINTS}} **Funder or stakeholder context**: {&{FUNDER_CONTEXT}} Produce the full 7-section gap analysis per your contract.

About this prompt

## What's wrong with most gap analyses They list 'understudied populations' and 'longitudinal data is needed' as if those were insights. They are not. They are reflexive complaints that appear in every literature review and rarely point to a fundable, publishable study. A good gap analysis tells you exactly what study would matter — and why. ## What this prompt does It classifies gaps by type using a six-category taxonomy (knowledge, methodological, population, theoretical, practical, measurement), specifies each gap with its closing study, prioritizes them across three dimensions (importance, feasibility, fundability), and runs a triviality audit on the top candidates. ## The triviality audit is the safety feature For every top-ranked gap, the prompt asks: 'A skeptical reviewer would say this is a busywork project because ___' — and forces a rebuttal, or down-ranks the gap. This is the same exercise grant panels run, applied at the planning stage rather than the rejection-letter stage. ## The methodological gap is the bravest one Most researchers under-surface methodological gaps because doing so implies criticizing colleagues. The prompt explicitly invites them, including over-reliance on undergraduate samples, single-source self-report, and cross-sectional inference about causal questions. These are the gaps that, once closed, change the field. ## Anti-hallucination posture The prompt forbids inventing papers or findings, distinguishes 'gap because the corpus does not include relevant work' from 'gap because the field has not addressed this', and flags when corpus size is too small for confident gap claims. ## When to use - Doctoral students designing dissertation topics with publishable trajectories - Junior faculty mapping a 5-year research agenda for tenure-case coherence - R&D teams identifying white-space topics for industrial research investment - Grant teams writing the 'significance' and 'innovation' sections of NIH or NSF proposals ## Pro tip Feed in 15–25 papers including both seminal and recent ones. Below 10, the gap-classification has too little signal; above 30, the per-paper precision drops. Mix systematic-review papers with empirical primaries — reviews surface what the field has settled, primaries surface what remains contested.

When to use this prompt

  • check_circleDoctoral students choosing dissertation topics with publishable trajectories
  • check_circleJunior faculty mapping a five-year research agenda for tenure coherence
  • check_circleR&D teams identifying white-space topics for industry research investment

Example output

smart_toySample response
A 7-section Markdown gap analysis: corpus inventory table, established findings per theme, classified gap blocks, prioritization matrix, top 5 gap-closing studies with rough designs, triviality audit, and open theoretical tensions.
signal_cellular_altadvanced

Latest Insights

Stay ahead with the latest in prompt engineering.

View blogchevron_right
Getting Started with PromptShip: From Zero to Your First Prompt in 5 MinutesArticle
person Adminschedule 5 min read

Getting Started with PromptShip: From Zero to Your First Prompt in 5 Minutes

A quick-start guide to PromptShip. Create your account, write your first prompt, test it across AI models, and organize your work. All in under 5 minutes.

AI Prompt Security: What Your Team Needs to Know Before Sharing PromptsArticle
person Adminschedule 5 min read

AI Prompt Security: What Your Team Needs to Know Before Sharing Prompts

Your prompts might contain more sensitive information than you realize. Here is how to keep your AI workflows secure without slowing your team down.

Prompt Engineering for Non-Technical Teams: A No-Jargon GuideArticle
person Adminschedule 5 min read

Prompt Engineering for Non-Technical Teams: A No-Jargon Guide

You do not need to know how to code to write great AI prompts. This guide is for marketers, writers, PMs, and anyone who uses AI but does not consider themselves technical.

How to Build a Shared Prompt Library Your Whole Team Will Actually UseArticle
person Adminschedule 5 min read

How to Build a Shared Prompt Library Your Whole Team Will Actually Use

Most team prompt libraries fail within a month. Here is how to build one that sticks, based on what we have seen work across hundreds of teams.

GPT vs Claude vs Gemini: Which AI Model Is Best for Your Prompts?Article
person Adminschedule 5 min read

GPT vs Claude vs Gemini: Which AI Model Is Best for Your Prompts?

We tested the same prompts across GPT-4o, Claude 4, and Gemini 2.5 Pro. The results surprised us. Here is what we found.

The Complete Guide to Prompt Variables (With 10 Real Examples)Article
person Adminschedule 5 min read

The Complete Guide to Prompt Variables (With 10 Real Examples)

Stop rewriting the same prompt over and over. Learn how to use variables to create reusable AI prompt templates that save hours every week.

pin_invoke

Token Counter

Real-time tokenizer for GPT & Claude.

monitoring

Cost Tracking

Analytics for model expenditure.

api

API Endpoints

Deploy prompts as managed endpoints.

rule

Auto-Eval

Quality scoring using similarity benchmarks.

Research Gap Identifier Prompt | AI Literature Gap Analysis for ChatGPT & Claude | PromptShip