General prompting tips: the 10 habits that compound
Ten universal prompting habits that lift output quality on every task. Be specific, show examples, constrain explicitly, lead with the verb — none individually transformative; together they compound.
Most prompt engineering advice falls into two buckets: specific techniques (Chain-of-Thought, few-shot, ReAct) and vague platitudes ("be clear," "be specific"). This guide is the middle ground — ten habits that aren't techniques exactly, but compound across every prompt you write.
None individually transforms output quality. Stacked together, they separate the prompts that ship from the ones that drift. Each tip here came from watching specific failure modes show up over and over in production.
The whole idea in one line
1. Specificity beats every other technique#
The single highest-leverage habit. "Write a tweet" produces generic output; "Write a tweet for senior engineers about the trade-offs of Postgres vs MongoDB, in 240 characters, ending with a question" produces something usable on the first try.
The model is statistical. Vague prompts pull from a vague distribution; specific prompts pull from a specific one. Every constraint you add narrows the space of plausible outputs.
2. Show, don't describe#
For style, format, or tone — anything subjective — examples beat instructions every time. Three input/output pairs teach the model more about your output style than three paragraphs describing it.
See few-shot prompting for the full pattern. The shortcut: when you find yourself writing "the output should feel like X," replace that sentence with an example output.
3. Tell the model what TO do, not just what NOT to do#
Negative instructions are weaker signals than positive ones. "Don't use marketing-speak" tells the model what to avoid but leaves the alternative space wide open. "Use plain, direct language with concrete nouns and active verbs" tells it what to do — much easier to comply with.
Pair them when you must, but lead with the positive form.
4. Constrain length explicitly#
"Concise" means nothing to the model. "Under 80 words" means something exact. Always specify length in concrete units — words, characters, bullets, sentences, paragraphs.
Without an explicit constraint, models default to a verbose-ish middle. Almost always longer than you wanted, almost never shorter.
5. Lead with the verb#
The first word of your instruction sets the task frame. Summarize, classify, translate, extract, generate, rewrite. Models weight prompt openings; a strong verb upfront grounds the rest of the prompt.
The anti-pattern: "Could you please help me by maybe taking a look at the email below…" — politeness words don't help the model and dilute the signal.
6. Always wrap user input in delimiters#
XML tags, triple quotes, or markdown blocks around any user-supplied content. Two reasons:
- The model can distinguish data from instructions, which improves output consistency.
- It's a baseline mitigation against prompt injection — without delimiters, malicious user input can override your prompt entirely.
Summarize the email below in 3 bullets.
<email>
{{user_email}}
</email>
Summary:7. Anchor the output format with an indicator#
End the prompt with a token that primes the format: JSON: for JSON, Reply: for reply text, Summary: for a summary. Without this, models add preambles — "Here's the JSON you requested:" followed by the JSON. The indicator skips that.
On Claude specifically, combine with prefilling for guaranteed format adherence. See prompting Claude.
8. Specify what to do when uncertain#
Without explicit fallback rules, models invent instead of refusing. Add a clear escape hatch:
- "If the email doesn't mention a refund, output null."
- "If you're uncertain, say 'I don't have enough information.'"
- "If the document doesn't contain the answer, say so explicitly rather than inferring."
This single habit prevents most hallucinations in production.
9. One job per prompt#
If your prompt does five things, it does each of them ~80% as well as a focused prompt would. Split. See prompt chaining. The reliability math: compounding 95%- reliable steps beats one 70%-reliable mega-prompt.
The signal: if your prompt has more than one verb, more than one output type, or more than three constraints sections, you probably have two prompts hiding in one.
10. Test with adversarial inputs from day one#
The "happy path" works. The 5% of weird inputs is what breaks production. Build a small eval set early — ~20 inputs, including 3-5 you suspect are hard. Re-run on every prompt change.
See A/B testing prompts for the workflow. Skipping eval is the most common reason teams ship prompts that regress silently.
Five bonus habits#
- Spell out abbreviations the first time. "PR" could mean pull request, public relations, or Puerto Rico to the model. Disambiguate.
- Use markdown structure (headers, lists) in long prompts. Models respect markdown structure when generating; they also parse it more reliably as input.
- Avoid ambiguous pronouns. "Use it correctly" — what is it? Restate the noun.
- Number sequential steps. Models follow numbered lists more reliably than prose paragraphs full of "then," "next," "finally."
- Re-read your prompt as if you'd never seen the task before. Most ambiguity is invisible to the author. The fresh-eyes pass catches it.
The cost of skipping these
Going further: when these tips aren't enough#
These habits handle the universal failure modes. When you hit a problem they don't solve, reach for specific techniques:
- Output format keeps drifting → few-shot prompting with consistent examples
- Reasoning errors on multi-step tasks → Chain-of-Thought
- Hallucinations on factual content → RAG for grounded retrieval
- Need fresh data the model doesn't have → ReAct with tool use
- Hard reasoning where single-pass fails → self-consistency or a reasoning model
Quick reference#
The 10 habits — in checklist form
1. Specific over vague. Concrete constraints, not adjectives.
2. Show with examples instead of describing in prose.
3. Tell the model what TO do, not just what NOT to do.
4. Constrain length in concrete units (words, bullets, sentences).
5. Lead with a strong verb.
6. Wrap user input in delimiters (XML tags or triple quotes).
7. End with an output indicator that anchors the format.
8. Specify the fallback when the model is uncertain.
9. One job per prompt; chain when you have multiple jobs.
10. Build an eval set early; test on adversarial inputs.
What to read next#
These tips form the foundation. Build on top with techniques — zero-shot, few-shot, Chain-of-Thought. For the universal model knobs, LLM settings. And to make these habits durable across your team, build a team prompt library.
Put this guide to work
Save your prompts, version every change, and share them with your team — free for up to 200 prompts.