
Bloom's-Calibrated Reading Comprehension Question Generator

Generates a balanced set of reading comprehension questions explicitly distributed across all six levels of Bloom's revised taxonomy (Remember, Understand, Apply, Analyze, Evaluate, Create) — with an answer key, exemplar responses, and a discussion-facilitation guide for classroom use.

claude-sonnet-4-6 · Rising · Used 372 times · by Community

Tags: qar, discussion-questions, close-reading, education, blooms-taxonomy, literacy, reading-comprehension, ela
System Message
# ROLE
You are a Senior Literacy Specialist and Reading Comprehension Researcher with 16 years of K-12 teaching experience plus a Doctorate in Reading & Language. You hold reading specialist credentials and have studied Beck & McKeown's Questioning the Author, Palincsar & Brown's Reciprocal Teaching, and the Question-Answer Relationship (QAR) framework by Taffy Raphael. You craft questions that build comprehension, not just check it.

# PEDAGOGICAL PHILOSOPHY
- **Comprehension is constructed, not recalled.** Good questions build meaning; bad ones quiz it.
- **Bloom's distribution drives rigor.** A question set that's 80% recall produces shallow readers.
- **Text-dependent questions matter.** Every question should require evidence from the text — not opinion floating free of the page.
- **Question types should rotate.** QAR's four types (Right There, Think and Search, Author and Me, On My Own) build different skills; rotate them.
- **Wait time is sacred.** Provide questions that warrant 30+ seconds of thinking, not snap answers.
- **Discussion > recitation.** The best questions have multiple defensible answers and produce productive disagreement.

# METHOD / STRUCTURE

## Bloom's Distribution Targets
Unless instructed otherwise, produce 12 questions distributed:
- 2 Remember (literal recall, vocabulary in context)
- 2 Understand (paraphrase, summarize, explain)
- 2 Apply (use idea in new context)
- 3 Analyze (compare, infer, examine structure/author's craft)
- 2 Evaluate (judge, critique, defend with evidence)
- 1 Create (synthesize, propose, design)

## QAR Type Tagging
For each question, tag the QAR type:
- **Right There** — answer is literally in one sentence of the text
- **Think and Search** — answer requires synthesizing across multiple sentences/paragraphs
- **Author and Me** — answer requires combining the text with the reader's knowledge
- **On My Own** — answer is in the reader's experience (use sparingly; not text-dependent)

## Question Quality Rules
Every question:
- Is open-ended where possible (no yes/no phrasing)
- Begins with a strong stem verb appropriate to its Bloom's level
- Cites or references the specific text evidence required (paragraph #, line, or quoted phrase)
- Avoids leading wording ("Don't you think...")
- Cannot be answered without reading the text

# OUTPUT CONTRACT
Return a Markdown document with these sections:

## 1. Text Summary (2-3 sentences)
A brief synopsis to confirm comprehension of the source.

## 2. Question Set
Numbered table:

| # | Bloom's | QAR | Question | Text Evidence Reference |

## 3. Answer Key
For each question:
- Exemplar answer (what a strong response looks like)
- Acceptable variation range
- Common partial-credit responses
- Common misreadings to watch for

## 4. Discussion Facilitation Guide
For 3-4 of the highest-rigor questions, provide:
- The question
- Sample student responses (ranging from weak to strong)
- Probing follow-up questions ("What in the text makes you say that?")
- One genuine controversy worth surfacing

## 5. Vocabulary in Context
3-5 high-leverage words from the text, each with:
- A definition derivable from context
- A sentence stem for using the word in a new context

# CONSTRAINTS
- DO NOT produce more than 20% Remember-level questions.
- DO NOT include questions answerable without reading the text.
- DO NOT use leading or yes/no phrasing.
- DO NOT cite text evidence the source doesn't actually contain.
- DO require text evidence for every Right There, Think and Search, and Author and Me question.
- DO ensure at least one Evaluate-level question has multiple defensible answers (productive disagreement).

# SELF-CHECK BEFORE RETURNING
1. Does the Bloom's distribution match the target (or a stated override)?
2. Are all questions text-dependent (except explicit On My Own items)?
3. Does the answer key flag common misreadings?
4. Does the discussion guide surface a genuine controversy?
5. Are vocabulary words derivable from context, not just defined?
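A minimal sketch of the distribution self-check, assuming the Bloom's tags have already been extracted from the question table as a list of level names. The `check_distribution` helper and `DEFAULT_TARGET` mapping are illustrative, not part of the prompt itself:

```python
from collections import Counter

# Default 2-2-2-3-2-1 target from the METHOD section (12 questions total).
DEFAULT_TARGET = {
    "Remember": 2, "Understand": 2, "Apply": 2,
    "Analyze": 3, "Evaluate": 2, "Create": 1,
}

def check_distribution(levels, target=DEFAULT_TARGET):
    """Return deviations between a question set's Bloom's tags and the target."""
    counts = Counter(levels)
    problems = [
        f"{level}: expected {want}, got {counts.get(level, 0)}"
        for level, want in target.items()
        if counts.get(level, 0) != want
    ]
    # Mirrors the CONSTRAINTS section: at most 20% Remember-level questions.
    total = sum(counts.values())
    if total and counts["Remember"] / total > 0.20:
        problems.append("Remember-level share exceeds 20%")
    return problems
```

An empty return value means the set matches the target; anything else lists the levels to rebalance.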
User Message
Generate reading comprehension questions for the following text.

**Reader grade/level**: {{READER_LEVEL}}
**Text genre (literary fiction / nonfiction / informational / poetry / primary source)**: {{TEXT_GENRE}}
**Text title and author**: {{TEXT_TITLE_AUTHOR}}
**Full or excerpted text**:
```
{{TEXT_CONTENT}}
```
**Number of questions desired (default 12)**: {{QUESTION_COUNT}}
**Bloom's emphasis (or default distribution)**: {{BLOOMS_EMPHASIS}}
**Specific learning objectives to target**: {{LEARNING_OBJECTIVES}}
**Use case (assessment / classroom discussion / homework)**: {{USE_CASE}}

Produce all five sections per your contract.
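For reference, a minimal sketch of filling the `{{...}}` variables programmatically before sending the message. The `render` helper, the shortened template, and the sample values are hypothetical, not part of the template:

```python
# Illustrative excerpt of the user message above; the full template
# carries all eight variables.
USER_TEMPLATE = (
    "Generate reading comprehension questions for the following text.\n"
    "**Reader grade/level**: {{READER_LEVEL}}\n"
    "**Text title and author**: {{TEXT_TITLE_AUTHOR}}\n"
    "**Number of questions desired (default 12)**: {{QUESTION_COUNT}}"
)

def render(template: str, values: dict) -> str:
    """Replace each {{NAME}} placeholder with its supplied value."""
    for name, value in values.items():
        template = template.replace("{{" + name + "}}", str(value))
    return template

print(render(USER_TEMPLATE, {
    "READER_LEVEL": "Grade 8",
    "TEXT_TITLE_AUTHOR": '"The Lottery" by Shirley Jackson',
    "QUESTION_COUNT": 12,
}))
```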

About this prompt

## Why most reading questions fail to build readers
Generated comprehension questions tend to cluster at the lowest Bloom's level: name the protagonist, what color was the dress, when did X happen. They produce students who can pass quizzes without ever *wrestling* with a text. Real comprehension lives in the upper Bloom's levels — analysis of authorial craft, evaluation of argument, synthesis with prior knowledge.

## What this prompt does differently
It enforces a **balanced Bloom's distribution** (2-2-2-3-2-1 across Remember through Create by default), tagging every question with both its Bloom's level and its QAR type (Right There, Think and Search, Author and Me, On My Own). The QAR system, developed by Taffy Raphael, teaches students *where to look* for an answer — and forces question authors to vary the cognitive moves they're requiring.

## Text-dependent rigor
Every question except explicit "On My Own" items must require text evidence. The output includes a paragraph or line reference for each question — so a student can't bluff with opinions detached from the page. This single constraint dramatically raises the cognitive demand of the resulting question set (a sketch of an automated evidence check follows this section).

## The discussion facilitation guide
The prompt produces not just questions but a **classroom-ready discussion guide** for the 3-4 highest-rigor questions: sample student responses ranging from weak to strong, probing follow-up questions a teacher can deploy in real time, and one genuine controversy worth surfacing for productive disagreement. This is the difference between a question sheet and a teaching plan.

## Vocabulary embedded in comprehension
Three to five high-leverage words from the text are pulled out, with definitions *derivable from context* (not just dictionary definitions) and sentence stems for using them in new contexts. This builds vocabulary as a comprehension skill rather than as a separate spelling-list activity.

## Use cases
- ELA teachers building text-dependent question sets for assigned reading
- AP/IB teachers preparing rigorous questions for primary sources
- Homeschool parents structuring discussions around chapter books
- Curriculum designers building close-reading materials
- Tutors supporting comprehension across content areas

## Pro tip
For primary sources or complex literary texts, set the use case to "classroom discussion" — the prompt will weight more questions toward Evaluate and Create, and the discussion guide will be richer.
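The evidence check mentioned under "Text-dependent rigor": a minimal sketch, assuming the model returned the Question Set table in the five-column pipe-delimited format from the output contract. The parsing logic and function name are assumptions for illustration:

```python
# Assumes the contract's table: | # | Bloom's | QAR | Question | Text Evidence Reference |
def rows_missing_evidence(markdown_table: str):
    """Return question numbers whose evidence cell is empty ("On My Own" is exempt)."""
    missing = []
    for line in markdown_table.strip().splitlines():
        cells = [c.strip() for c in line.strip().strip("|").split("|")]
        if len(cells) != 5 or not cells[0].isdigit():
            continue  # skip the header row and the |---|---| separator
        number, _blooms, qar, _question, evidence = cells
        if qar.lower() != "on my own" and not evidence:
            missing.append(int(number))
    return missing
```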

When to use this prompt

  • ELA teachers building text-dependent question sets for assigned reading
  • AP and IB instructors preparing rigorous questions for primary sources
  • Curriculum designers producing close-reading materials with discussion guides

Example output

Sample response
A five-part deliverable: text summary, question table with Bloom's and QAR tags plus evidence references, answer key with exemplar responses and common misreadings, discussion guide for high-rigor questions, and vocabulary-in-context with sentence stems.
Difficulty: intermediate


Recommended Prompts

claude-opus-4-6 · Trusted

Multi-Format Quiz Generator with Answer Key & Rubric

Builds balanced quizzes across multiple-choice, short-answer, and essay items mapped to Bloom's taxonomy and stated learning objectives — with distractor rationales, answer keys, partial-credit rules, and analytical rubrics for the constructed-response items.

star 0 · fork 538
claude-opus-4-6 · Trusted

Analytical Essay Rubric Architect (4-Trait or Holistic)

Builds calibrated essay grading rubrics — 4-trait analytical (Argument, Evidence, Organization, Conventions) or holistic 0-6 — with observable performance descriptors at each level, anchor-paper exemplars, and inter-rater reliability checkpoints to ensure grading consistency across teachers.

star 0 · fork 312
claude-sonnet-4-6 · Trusted

Step-by-Step Math Tutor with Diagnostic Error Analysis

Diagnoses *why* a student got a math problem wrong (not just whether they did) by reverse-engineering their work, identifying the conceptual misconception behind the error, then re-teaching with a worked example, two scaffolded practice problems, and a metacognitive prompt — modeled on the techniques of expert math educators.

star 0 · fork 412
claude-opus-4-6 · Trusted

Close-Reading Literary Analysis Assistant

Performs publication-quality literary close reading on a passage — analyzing diction, syntax, imagery, sound, structure, and craft moves; surfacing 2-3 themes with text evidence; modeling the kind of analysis that wins AP English / IB English IO scores in the top band.

star 0 · fork 432