Few-Shot Prompting: How to Make LLMs Smarter with Examples

Large Language Models (LLMs) can do a lot — they can summarize text, classify things, pull out details, or even write code. But sometimes their answers are… messy.

For example, if you ask the model to tell you the sentiment of a review, you might just want:

positive

Instead, it replies with:

“This review is very positive because the person expresses strong satisfaction…”

Not wrong, but way more than you asked for — and not very developer-friendly.

This is where few-shot prompting comes in. By giving the model a couple of clear examples, we can guide it to produce exactly the kind of output we want — more accurate, more consistent, and far easier to use in real-world applications.

What is Few-Shot Prompting?

At its core, few-shot prompting is simple:

  • You show the model a few input-output examples.
  • Then you give it a new input.
  • Because of the examples, the model follows the same style and format in its response.

It’s like showing a junior developer how you’d like them to structure their code — once they’ve seen a few samples, they tend to follow the pattern.

This idea became popular with OpenAI’s famous paper “Language Models are Few-Shot Learners”. Since then, it’s become one of the most practical tricks for developers working with LLMs.
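Stripped of any framework, few-shot prompting is really just string assembly: labeled examples concatenated in a fixed pattern, followed by the new input in the same pattern. A minimal, framework-free sketch (the helper name and format here are illustrative, not from any library):

```python
# Each example is a (text, label) pair we want the model to imitate.
examples = [
    ("The movie was boring and too long", "negative"),
    ("What a fantastic experience, I'd recommend it!", "positive"),
]

def build_few_shot_prompt(examples, new_input):
    # Render each example in the same "Text: ... / Sentiment: ..." pattern,
    # then end with the new input and a trailing "Sentiment:" for the model
    # to complete.
    lines = []
    for text, label in examples:
        lines.append(f"Text: {text}\nSentiment: {label}")
    lines.append(f"Text: {new_input}\nSentiment:")
    return "\n\n".join(lines)

prompt = build_few_shot_prompt(examples, "I love this product!")
print(prompt)
```

The trailing `Sentiment:` is the key trick: it leaves the model nowhere to go but the one-word completion the examples demonstrate.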

Zero-Shot vs. Few-Shot: Why Examples Matter

Suppose we want to classify text into positive, negative, or neutral sentiment.

Zero-Shot Prompt

If we just ask the model directly:

from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini")
query = "Classify the sentiment: 'I love this product, it’s amazing!'"
print(llm.invoke(query).content)

We might get something like:

“This review is very positive because the person expresses strong satisfaction…”

Okay… but we only wanted the single word:

positive

Few-Shot Prompt

Now let’s add a few examples into the prompt:

from langchain.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini")

# Step 1: Examples
examples = [
    ("human", "Text: The movie was boring and too long"),
    ("ai", "negative"),
    ("human", "Text: What a fantastic experience, I’d recommend it!"),
    ("ai", "positive"),
    ("human", "Text: It was okay, not great but not terrible"),
    ("ai", "neutral"),
]

# Step 2: Build the chat template
final_prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are a sentiment analysis classifier. Always respond with 'positive', 'negative', or 'neutral'."),
        *examples,
        ("human", "Text: {query}"),
    ]
)

# Step 3: Run the model
messages = final_prompt.format_messages(query="I love this product, it’s amazing!")
response = llm.invoke(messages)
print(response.content)

This time the model responds with:

positive

Clean, predictable, and exactly what we need for downstream code.
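Even with good examples, models occasionally drift (extra punctuation, capitalization, a trailing newline), so it’s worth normalizing and validating the label before handing it to downstream code. A small defensive sketch (the function name and cleanup rules are illustrative):

```python
VALID_LABELS = {"positive", "negative", "neutral"}

def parse_sentiment(raw: str) -> str:
    # Normalize whitespace and case, and strip stray punctuation
    # the model sometimes appends.
    label = raw.strip().strip(".'\"").lower()
    if label not in VALID_LABELS:
        raise ValueError(f"Unexpected sentiment label: {raw!r}")
    return label

print(parse_sentiment("Positive.\n"))  # → positive
```

Raising on an unexpected label keeps a silently wrong classification from propagating through the rest of the pipeline.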

Another Example: JSON Output Enforcement

Zero-Shot

query = "Extract name and age: 'My name is Alice and I am 27 years old.'"
response = llm.invoke(query)
print(response.content)

Might give:

The person’s name is Alice and they are 27 years old.

Few-Shot with JSON

from langchain.prompts import ChatPromptTemplate, FewShotChatMessagePromptTemplate

examples = [
    {"input": "My name is Bob and I am 34", "output": '{"name": "Bob", "age": 34}'},
    {"input": "I am Sarah, 29 years old", "output": '{"name": "Sarah", "age": 29}'},
]

# FewShotChatMessagePromptTemplate expects a chat-style example prompt:
# each example is rendered as a human/AI message pair.
example_prompt = ChatPromptTemplate.from_messages(
    [("human", "Input: {input}"), ("ai", "{output}")]
)
few_shot = FewShotChatMessagePromptTemplate(example_prompt=example_prompt, examples=examples)

final_prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are an information extractor. Always output valid JSON."),
        few_shot,
        ("human", "Input: My name is Alice and I am 27 years old."),
    ]
)

response = llm.invoke(final_prompt.format_messages())
print(response.content)

Output:

{"name": "Alice", "age": 27}
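Because the system prompt demands valid JSON, the reply can be fed straight into `json.loads` — with a guard for the occasional malformed or incomplete response. A hedged sketch (the helper name and required fields are illustrative):

```python
import json

def parse_person(raw: str) -> dict:
    # Parse the model's JSON reply and verify the fields we rely on exist.
    data = json.loads(raw)
    if not {"name", "age"} <= data.keys():
        raise ValueError(f"Missing expected fields in: {raw!r}")
    return data

person = parse_person('{"name": "Alice", "age": 27}')
print(person["name"], person["age"])  # → Alice 27
```

`json.loads` will raise on anything that isn’t valid JSON, so a bad model reply fails loudly at the boundary instead of corrupting data further downstream.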

Why This Matters

Few-shot prompting is one of those “small effort, big reward” techniques. With just a few examples, you can:

  • Turn messy, verbose outputs into clean structured ones.
  • Make classifications consistent (no more guesswork).
  • Enforce strict formats for downstream systems.
  • Reduce debugging time by avoiding edge-case inconsistencies.

Whether you’re doing sentiment analysis, information extraction, or even style transfer, few-shot prompting gives you an easy way to steer the model — without retraining or fine-tuning.

Final Thoughts

Few-shot prompting won’t make your model magically smarter, but it will make its outputs cleaner, more predictable, and production-ready.

Next time your LLM feels too chatty or inconsistent, try adding a couple of examples — it’s one of the simplest tricks with the biggest payoff.