Learn the core patterns (CoT, ReAct, Few-shot) to get reliable outputs from LLMs.
- What is Prompt Engineering?
- The 80/20 Mental Model
- Pattern #1: Few-Shot (Show, Don’t Tell)
- Pattern #2: Chain-of-Thought (Think in Steps)
- Pattern #3: ReAct (Reason + Act with Tools)
- Guardrails: Roles, Rules, and Boundaries
- Evaluate & Iterate: A Simple Workflow
- Common Pitfalls (and Quick Fixes)
- Copy-Paste Cheat Sheet
- FAQ
- FAQ Schema (JSON-LD)
- Further Reading (Sources)
What is Prompt Engineering?
Prompt engineering is the craft of turning fuzzy goals (“make me a plan”) into explicit instructions an AI can follow reliably (“produce a 5-step plan, with deadlines, risks, and next actions”). Good prompts reduce hallucinations, improve accuracy, and make your assistant feel…well…useful.
Researchers and AI providers have converged on a few core patterns that consistently boost quality: Few-shot, Chain-of-Thought (CoT), and ReAct. We’ll cover each with plain-English examples and ready-to-paste templates.
The 80/20 Mental Model
If you remember only this:
- Be specific about outcome & format. (What should the final answer look like?)
- Provide examples. (Models learn from patterns you show them.)
- Let the model think. (Ask for steps, assumptions, or a brief rationale.)
- Use tools when needed. (Search, code, calculators, docs.)
- Evaluate quickly. (Keep test sets; iterate on failures.)
This 80/20 covers most practical tasks you’ll do as a solo builder or small team.
Pattern #1: Few-Shot (Show, Don’t Tell)
Why it works: Large language models perform better when you show examples of what “good” looks like. This is called in-context learning.
Template (Few-shot classification):
You are a precise assistant that labels customer messages as {Bug, Feature Request, Billing, Chitchat}.
Here are examples:
Q: "My card was charged twice!" → A: Billing
Q: "The app crashes when I click save." → A: Bug
Q: "Could you add dark mode?" → A: Feature Request
Q: "lol thanks" → A: Chitchat
Now label only with one of these four categories.
Q: "{{NEW_MESSAGE}}"
A:
Template (Few-shot transformation):
Task: Convert informal notes into a 3-bullet executive summary.
Examples:
Notes: "meeting ok, ship v2 next week; risk: API limits; owner: Sam"
Summary:
• Ship v2 next week
• Risk: API limits
• Owner: Sam
Notes: "{{RAW_NOTES}}"
Summary:
Pro tips
- Keep examples short and diverse (cover edge cases).
- Put your label set or style guide right next to examples.
- If the model hesitates, add a final line: “Answer with exactly one label.”
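To wire this template into an app, here is a minimal sketch. It assumes a hypothetical call_llm stand-in for whichever provider SDK you use; it builds the few-shot prompt from the examples above, sends it, and checks that the reply is exactly one of the four labels.

LABELS = {"Bug", "Feature Request", "Billing", "Chitchat"}

EXAMPLES = [
    ("My card was charged twice!", "Billing"),
    ("The app crashes when I click save.", "Bug"),
    ("Could you add dark mode?", "Feature Request"),
    ("lol thanks", "Chitchat"),
]

def call_llm(prompt: str) -> str:
    # Hypothetical stand-in: replace with your provider's completion call.
    raise NotImplementedError

def build_prompt(message: str) -> str:
    lines = ["You are a precise assistant that labels customer messages as "
             "{Bug, Feature Request, Billing, Chitchat}.",
             "Here are examples:"]
    lines += [f'Q: "{q}" → A: {a}' for q, a in EXAMPLES]
    lines += ["Answer with exactly one label.", f'Q: "{message}"', "A:"]
    return "\n".join(lines)

def classify(message: str) -> str:
    label = call_llm(build_prompt(message)).strip()
    # Guard against off-list answers; the fallback label here is an arbitrary choice.
    return label if label in LABELS else "Chitchat"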
Pattern #2: Chain-of-Thought (Think in Steps)
Why it works: Asking the model to reason in intermediate steps improves performance on math, planning, and logic. In research, this is called Chain-of-Thought prompting.
Note: Some providers won’t reveal full internal reasoning traces for safety; you can still request a brief, structured rationale (steps, assumptions, sources) without exposing raw “thoughts.”
Template (CoT planning with concise rationale):
Goal: Plan a 4-week study roadmap for {{TOPIC}} for a busy professional (1h/day).
Produce:
1) Weekly objectives (Week 1–4)
2) Daily 1-hour plan
3) 5 key resources (links)
4) Risks + mitigations
Before the final plan, outline your assumptions in 4 short bullet points.
Keep the rationale brief (<= 120 words).
Then present the plan in markdown tables.
Template (CoT calculation with checks):
Task: Compute the monthly payment for a ${{AMOUNT}} loan at {{RATE}}% APR over {{YEARS}} years.
Show:
- Variables + units
- Formula used
- Final number (rounded)
- A quick sanity check (1 line)
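If you want to verify the model's arithmetic outside the prompt, the standard amortized-payment formula is easy to compute directly. A quick sketch (the example values are illustrative):

def monthly_payment(amount: float, apr_percent: float, years: int) -> float:
    # Standard amortization formula: M = P*r*(1+r)**n / ((1+r)**n - 1)
    r = apr_percent / 100 / 12   # monthly interest rate
    n = years * 12               # number of monthly payments
    if r == 0:
        return amount / n        # zero-interest edge case
    return amount * r * (1 + r) ** n / ((1 + r) ** n - 1)

# Example sanity check: a $20,000 loan at 6% APR over 5 years ≈ $386.66/month.
print(round(monthly_payment(20_000, 6, 5), 2))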
Pro tips
- Use phrases like “outline assumptions briefly” or “show key steps” rather than “think step-by-step” if your provider suppresses full reasoning by default.
- Add sanity checks or acceptance tests (“If X > Y, flag a warning.”).
Pattern #3: ReAct (Reason + Act with Tools)
Why it works: Many tasks require thinking and doing (e.g., searching the web, calling APIs). ReAct interleaves reasoning with actions: the model plans, calls a tool, reads results, updates the plan, and continues. This cuts hallucinations and increases factual accuracy.
Template (ReAct with tool calls — pseudo-format):
Role: You are an analyst assistant. You can use tools: {web_search, code, calculator}.
When solving:
- Think about what you need to know (short bullets).
- Choose a tool only if needed; explain why.
- After getting results, update your plan.
- End with a sources list.
User question: "{{QUESTION}}"
Tool contract tip: Define clear tool names, inputs/outputs, and when to use them. Example: web_search(query: string) -> top_k results with titles+urls.
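In code, ReAct is essentially a loop that alternates model turns with tool calls. Here is a minimal sketch, assuming hypothetical call_llm and web_search stand-ins and an illustrative ACTION/FINAL line protocol (a convention for this sketch, not any provider's built-in format).

def call_llm(prompt: str) -> str:
    raise NotImplementedError  # replace with your provider's completion call

def web_search(query: str) -> str:
    raise NotImplementedError  # replace with a real search API

def react(question: str, max_steps: int = 5) -> str:
    transcript = (
        "Answer the question. On each turn output either\n"
        "ACTION: web_search(<query>)  or  FINAL: <answer with sources>.\n"
        f"Question: {question}\n"
    )
    for _ in range(max_steps):
        reply = call_llm(transcript).strip()
        transcript += reply + "\n"
        if reply.startswith("FINAL:"):
            return reply.removeprefix("FINAL:").strip()
        if reply.startswith("ACTION: web_search(") and reply.endswith(")"):
            query = reply[len("ACTION: web_search("):-1]
            # Feed tool results back so the model can update its plan.
            transcript += f"OBSERVATION: {web_search(query)}\n"
    return "No confident answer within the step budget."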
Guardrails: Roles, Rules, and Boundaries
Good prompts set boundaries:
- Role: “You are a financial data analyst…”
- Audience & tone: “Write for beginners, professional but friendly.”
- Scope: “Only cover ETFs available in Canada; avoid individual stock picks.”
- Format: “Return a markdown table and a 120-word summary.”
- Safety & disclosure: “Cite sources. If uncertain, say so.”
These constraints align with provider best practices and reduce hallucinations.
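One way to keep these boundaries consistent across prompts is to store them as named fields and assemble the system message from them. A small sketch (the field names and wording are illustrative):

GUARDRAILS = {
    "Role": "You are a financial data analyst.",
    "Audience & tone": "Write for beginners, professional but friendly.",
    "Scope": "Only cover ETFs available in Canada; avoid individual stock picks.",
    "Format": "Return a markdown table and a 120-word summary.",
    "Safety": "Cite sources. If uncertain, say so.",
}

# Assemble one system message; prepend it to every request for this assistant.
system_message = "\n".join(f"{key}: {rule}" for key, rule in GUARDRAILS.items())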
Evaluate & Iterate: A Simple Workflow
- Define success with a rubric (e.g., Accuracy, Completeness, Clarity, References).
- Create a tiny test set (10–30 real prompts).
- A/B test two prompt variants; keep the winner.
- Log failures, add examples, tighten rules, or add a tool.
- Version your prompts like code (v1.2, v1.3…).
This “prompt-as-product” loop is how teams make assistants predictable over time.
Mini-rubric (copy-paste):
Rate 1–5 for: Accuracy • Evidence/Sourcing • Completeness • Style/Format
If any score <4, note why and suggest a prompt change.
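The loop above fits in a few lines of code. A minimal A/B harness sketch, assuming a hypothetical call_llm and a score_output function that applies the rubric (both are placeholders you supply):

from statistics import mean

def call_llm(prompt: str) -> str:
    raise NotImplementedError  # your provider's completion call

def score_output(output: str, expected: str) -> float:
    raise NotImplementedError  # apply the 1–5 rubric (manually or with a judge model)

def run_variant(prompt_template: str, test_set: list[tuple[str, str]]) -> float:
    scores = []
    for user_input, expected in test_set:
        output = call_llm(prompt_template.replace("{{INPUT}}", user_input))
        scores.append(score_output(output, expected))
    return mean(scores)

# Keep whichever variant scores higher, then version it like code (v1.2, v1.3, …).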
Common Pitfalls (and Quick Fixes)
- Vague goals → vague answers. Fix: specify audience, scope, and format.
- No examples. Fix: add 2–4 short examples (Few-shot).
- Hallucinated facts. Fix: use ReAct with a web/tool step; require citations.
- Messy outputs. Fix: demand a fixed schema (tables, JSON, bullets).
- Flaky results over time. Fix: keep a test set and A/B your prompts monthly.
Copy-Paste Cheat Sheet
Universal task scaffold
You are {{ROLE}}. Audience: {{AUDIENCE}}. Goal: {{GOAL}}.
Constraints:
- Scope: {{WHAT_TO_INCLUDE}}; exclude {{WHAT_NOT_TO_INCLUDE}}
- Format: {{DELIVERABLE_FORMAT}} (max {{LENGTH}}), with headings and bullet points
- Quality: Be accurate, cite sources with links; if uncertain, ask for clarification
Now produce the deliverable.
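The {{PLACEHOLDER}} slots in these templates don't need a templating library; plain string replacement is enough. A small sketch (the example values are made up):

def fill(template: str, **values: str) -> str:
    # Substitute each {{KEY}} placeholder with its value.
    for key, value in values.items():
        template = template.replace("{{" + key + "}}", value)
    return template

prompt = fill(
    "You are {{ROLE}}. Audience: {{AUDIENCE}}. Goal: {{GOAL}}.",
    ROLE="a financial data analyst",
    AUDIENCE="beginners",
    GOAL="compare three broad-market Canadian ETFs",
)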
Few-shot
Task: Transform input into {{TARGET_STYLE}}.
Examples:
Input: "..." → Output: "..."
Input: "..." → Output: "..."
Now transform:
Input: "{{NEW_INPUT}}"
Output:
CoT (concise rationale)
Before answering, list 3–5 assumptions (one line each).
Then present the solution with numbered steps and a final recommendation.
Keep rationale <= 120 words.
ReAct (reason + tool)
When facts are needed, use the web_search tool.
Explain (briefly) what you’re looking for and why; then update your plan from results.
Return a final answer + sources.
FAQ
Q1: Do I always need examples?
A: Not always, but Few-shot examples are the fastest way to steer style and labels. Start with 2–4 diverse examples (see Further Reading).
Q2: Isn’t Chain-of-Thought hidden now?
A: Many providers restrict showing raw internal reasoning. Ask for concise, structured rationales (assumptions, key steps) to keep the benefits without exposing private chain-of-thought (see Further Reading).
Q3: When should I use ReAct?
A: When tasks need fresh facts or multi-step decisions. ReAct reduces hallucinations by alternating reasoning with tool use (search, APIs, calculators); see Further Reading.
Q4: What about “prompt libraries” or “super prompts”?
A: They’re great starting points, but your context + test set matter more. Treat prompts like product code: version, test, iterate.
FAQ Schema (JSON-LD)
Paste this into your post’s header or a schema plugin for rich results.
<script type="application/ld+json">
{
"@context":"https://schema.org",
"@type":"FAQPage",
"mainEntity":[
{
"@type":"Question",
"name":"Do I always need examples?",
"acceptedAnswer":{"@type":"Answer","text":"Not always, but Few-shot examples quickly steer style and labels. Start with 2–4 diverse examples."}
},
{
"@type":"Question",
"name":"Is Chain-of-Thought visible?",
"acceptedAnswer":{"@type":"Answer","text":"Many providers restrict raw internal reasoning. Ask for concise, structured rationales (assumptions, key steps)."}
},
{
"@type":"Question",
"name":"When should I use ReAct?",
"acceptedAnswer":{"@type":"Answer","text":"Use ReAct when tasks need fresh facts or multi-step decisions; alternate reasoning with tool use to reduce hallucinations."}
}
]
}
</script>
Further Reading (Sources)
- OpenAI Help Center – Prompt engineering best practices (clear roles, constraints, examples, evaluation).
- Microsoft Learn – Azure OpenAI prompt engineering techniques (concise guidance and patterns).
- Few-shot learning: Brown et al., “Language Models are Few-Shot Learners” (NeurIPS 2020).
- Chain-of-Thought: Wei et al., “Chain-of-Thought Prompting Elicits Reasoning in Large Language Models” (2022).
- ReAct: Yao et al., “ReAct: Synergizing Reasoning and Acting in Language Models” (2022); see also the Google Research overview.

