Prompt Engineering Basics

Few-Shot Prompting: Examples That Work

Learn how including a few examples in your AI prompt dramatically improves output quality and consistency.

8 min read

When you show the model what you want instead of just describing it, everything changes. Few-shot prompting — embedding examples directly in your prompt — is the most reliable way to get AI to produce output in a specific format, style, or structure that you can't easily describe in words. A single well-chosen example often does more than a paragraph of instructions.

What Few-Shot Prompting Is

Few-shot prompting involves providing one to five input-output example pairs in your prompt before presenting the actual task. The model identifies the pattern from your examples and applies that same pattern to the new input. The 'few' refers to the number of examples — as opposed to zero-shot (no examples) or fine-tuning (hundreds or thousands of examples baked into the model's weights). Few-shot is a middle ground that's available to anyone at runtime without any additional training: you're teaching the model the pattern you want by showing rather than telling.
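To make the contrast concrete, here is a minimal sketch of the same classification task posed zero-shot and few-shot. The prompt wording and labels are illustrative, not a required format:

```python
# Zero-shot: instructions only -- the model infers the output format on its own.
zero_shot = (
    "Classify the sentiment of this review as POSITIVE or NEGATIVE.\n"
    "Review: 'Battery died after two days.'\n"
    "Sentiment:"
)

# Few-shot: three input-output pairs demonstrate the exact pattern,
# then the real input is appended for the model to complete.
examples = [
    ("Loved it, works exactly as described.", "POSITIVE"),
    ("Battery died after two days.", "NEGATIVE"),
    ("Shipping was fast and the fit is perfect.", "POSITIVE"),
]
few_shot = "Classify the sentiment of each review as POSITIVE or NEGATIVE.\n\n"
for review, label in examples:
    few_shot += f"Input: '{review}'\nOutput: {label}\n\n"
few_shot += "Input: 'The strap broke within a week.'\nOutput:"
```

Both strings go to the model as-is; the few-shot version simply spends a few extra tokens showing the pattern instead of describing it.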

When Few-Shot Dramatically Outperforms Zero-Shot

Few-shot shines when the desired output has a specific structure, style, or format that's hard to describe in words. Classification tasks with unusual categories, writing in a very particular tone that matches existing content, data extraction in a specific schema, scoring or rating with precise criteria — all of these improve dramatically with examples. The key indicator that you need few-shot is when zero-shot with good instructions keeps producing technically correct but structurally wrong output. Once you've seen that pattern twice, add an example instead of trying to describe the structure differently.

How to Write Good Examples

Few-shot examples need to be representative of the actual inputs you'll provide and demonstrate the exact output quality and format you want. Bad examples teach bad habits — inconsistent format, inappropriate length, wrong level of detail — and the model will faithfully reproduce those flaws. Each example should mirror a real input you'd expect to encounter and show the ideal response you'd want to receive. Use your three best historical outputs as examples if you have them. If you're starting from scratch, write the ideal outputs manually for 2-3 representative cases and use those.

How Many Examples Do You Need?

Two to three high-quality examples reliably outperform five mediocre ones. More examples use more tokens, increasing cost and latency, so the ROI of additional examples drops off quickly. For most tasks, start with one example; if the model still isn't calibrated correctly, add a second. Rarely do you need more than three. The exception is tasks where outputs need to be highly varied and the examples might cause the model to over-pattern on specific surface features — in that case, a more varied set of 4-5 examples reduces over-fitting to the examples themselves.

Structuring Few-Shot Examples in Your Prompt

The clearest way to structure few-shot examples is with explicit Input/Output labels: 'Input: [example input] Output: [example output]'. This makes the pattern unambiguous for the model. After your examples, present the actual task with 'Input: [real input] Output:' and let the model complete it. Some practitioners use Q/A, User/Assistant, or custom labels — any consistent structure works as long as it clearly separates inputs from outputs and real cases from examples. Consistency between example labels and real task labels is important: don't use 'Question' in examples and 'Input' for the real case.
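The labeling pattern above is easy to automate. A minimal helper (the function and parameter names here are illustrative) assembles the examples and the real case with the same delimiters, ending at the output label so the model completes it:

```python
def build_few_shot_prompt(instruction, examples, real_input,
                          input_label="Input", output_label="Output"):
    """Format example pairs and the real case with consistent labels."""
    parts = [instruction]
    for example_in, example_out in examples:
        parts.append(f"{input_label}: {example_in}\n{output_label}: {example_out}")
    # The real case uses the SAME labels as the examples, stopping at the
    # output label so the model fills in the answer.
    parts.append(f"{input_label}: {real_input}\n{output_label}:")
    return "\n\n".join(parts)

prompt = build_few_shot_prompt(
    "Extract the city mentioned in each sentence.",
    [("We flew into Osaka on Tuesday.", "Osaka"),
     ("The conference is in Lisbon this year.", "Lisbon")],
    "Her office recently moved to Nairobi.",
)
```

Because the labels come from one place, examples and the real case can never drift apart -- the 'Question' vs. 'Input' mismatch becomes impossible by construction.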

Few-Shot for Style Matching

One of the most powerful applications of few-shot prompting is matching a specific writing style or brand voice. Instead of trying to describe the style ('write in a warm, direct, slightly informal tone with short sentences and no jargon'), paste in 2-3 examples of existing content you want to match and say 'write a new item in the same style as these examples.' The model is remarkably good at capturing style from examples — often better than it is at interpreting style descriptions, because style is easier to show than to define.
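For style matching, the examples have no paired outputs -- they are just samples of the target voice, followed by the task. A quick sketch (sample texts and wording invented for illustration):

```python
# Existing content in the voice you want to reproduce.
style_samples = [
    "Short answer: yes. Plug it in, wait for the light, and you're done.",
    "Good news -- your plan already covers this. No extra steps needed.",
]
task = "Write a two-sentence reply confirming a refund was issued."

prompt = (
    "Write a new item in the same style as these examples.\n\n"
    + "\n\n".join(f"Example {i}:\n{s}" for i, s in enumerate(style_samples, 1))
    + f"\n\nTask: {task}"
)
```

Swapping in real published content for `style_samples` is the whole trick: the model matches tone, sentence length, and formality directly from the text.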

Prompt examples

✗ Weak prompt
Classify these support tickets as urgent or normal.

No criteria for what makes something urgent versus normal. The model will guess — and its guess may not match your team's classification logic.

✓ Strong prompt
Classify each support ticket as URGENT or NORMAL. Here are examples:
Input: 'I can't log in and I have a client demo in 2 hours'
Output: URGENT
Input: 'Can you explain how the export feature works?'
Output: NORMAL
Input: 'My payment failed three times and my account is locked'
Output: URGENT
Now classify: Input: '[ticket text]'
Output:

Three examples establish a clear pattern — urgency relates to time pressure, blocked access, and payment issues. The model now applies this specific logic rather than a general notion of urgency.
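When a classifier like this runs in code, it's worth validating that the completion really is one of the two labels the examples established. A small sketch (the validation helper is ours, not part of any model API):

```python
VALID_LABELS = {"URGENT", "NORMAL"}

def parse_label(completion):
    """Normalize a model completion to one of the expected labels."""
    label = completion.strip().upper()
    if label not in VALID_LABELS:
        raise ValueError(f"Unexpected label: {completion!r}")
    return label
```

Normalizing case and whitespace absorbs harmless variation (`" urgent\n"` still parses), while anything outside the label set fails loudly instead of silently entering your ticket queue.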

Practical tips

  • Use 2-3 high-quality examples rather than 5+ mediocre ones — fewer, better examples win every time.
  • Match examples to the distribution of real inputs: if 80% of your tasks are one type, weight your examples accordingly.
  • Label your examples clearly with Input/Output or similar consistent delimiters.
  • For style matching, paste in real content you want to match rather than trying to describe the style in words.
  • Review your examples as carefully as you'd review the final output — bad examples reliably produce bad outputs.

Continue learning

  • Zero-Shot Prompting Explained
  • Chain-of-Thought Prompting
  • Iterative Prompting

PromptIt builds few-shot examples automatically when your task needs them — no manual example construction required.

