How to Get Good Output from ChatGPT: Few-Shot Prompts

Updated: June 22, 2024

Getting good answers from GPT is a matter of consistency and providing good examples for the model. This is a guide to the prompting technique known as few-shot prompting or few-shot learning.

Few-shot is a fancy term for teaching a model within your prompt by giving it examples (no actual training happens; the model simply picks up the pattern from context). Most people figure this out naturally, and I’ve seen it given as advice in many ways: “be sure to give it details” or “paste in your product info,” etc. This technique alone can get you a decent way toward “human-like” output, or it can be used to give you more consistent output of any kind. All you are doing is showing ChatGPT what a good answer looks like.

Say you want ChatGPT to write a blog article, but it needs to blend in with your site. Give ChatGPT samples of existing articles you’ve written, at least 100 words each, or if you don’t care about tokens, you can give it whole articles if they fit within the context limit. Include the prompt you would normally use and at least one, preferably two, examples.

Context window lengths have grown considerably over the last year. GPT-3.5-Turbo has a 16k token context window and GPT-4o can handle 128k! Two is a sorry number of examples for solid multi-shot prompting; the more the merrier. If you can afford the tokens, I would suggest at least five examples in your input for in-context learning, and if they are short, a few dozen isn’t out of the question.


User: Write me a 700 word how-to guide on X
Assistant: -- Your pre-existing writing sample or article 1 --

User: Write me a 700 word how-to guide on Y
Assistant: -- Your pre-existing writing sample or article 2 --

User: Write me a 700 word how-to guide on Z


In the above example, that whole thing is your prompt. It contains two examples of your desired style and shows the model how to respond. The final request (Z) is your actual prompt, the one you want answered.

You’ll get an article about Z. From then on in the conversation, give your prompt and the new topic and it will know the drill. If there is a clear format and voice to your examples, it will attempt to emulate them. The more examples you give, and the more consistent the writing style and format of your samples, the better your results will be.
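If you’re calling the API instead of the chat UI, the same pattern maps directly onto the messages list: each example becomes a fake user/assistant turn, and your real request comes last. Here’s a minimal sketch in Python; `build_few_shot_messages` is a made-up helper name, and the article texts are placeholders for your own samples:

```python
def build_few_shot_messages(examples, new_topic):
    """Turn (topic, article) pairs into user/assistant example turns,
    then append the real request as the final user message."""
    messages = []
    for topic, article in examples:
        messages.append({"role": "user",
                         "content": f"Write me a 700 word how-to guide on {topic}"})
        messages.append({"role": "assistant", "content": article})
    messages.append({"role": "user",
                     "content": f"Write me a 700 word how-to guide on {new_topic}"})
    return messages

examples = [
    ("X", "-- your pre-existing writing sample or article 1 --"),
    ("Y", "-- your pre-existing writing sample or article 2 --"),
]
messages = build_few_shot_messages(examples, "Z")
```

Pass the resulting list as the `messages` argument to a chat completion call; the model treats the example pairs as prior turns in the conversation and tries to stay consistent with them.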

This is useful for getting formatted outputs too, e.g.:
(use real product examples in place of the X placeholders)


Product Format Example #1
Item: X
Desc: X
Sizes: X, X, X
Price: $X

Product Format Example #2
Item: X
Desc: X
Sizes: X, X, X
Price: $X

Put these products into the format used in the examples above:
-- bunch of messy unstructured product info --


That’s all your prompt. You get the idea. Like I said, you’re probably already doing this in some form. Now do it consciously and intentionally, craft good examples for GPT, and you will get better output. The prompts can be anything; the key is consistency and good examples, preferably at least two, to help it establish a pattern.
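As a rough sketch, the product-format prompt above can also be assembled programmatically, which keeps the examples consistent. `build_format_prompt` and the field names here are hypothetical, just one way to lay it out:

```python
# Template matching the product format used in the examples above.
TEMPLATE = """Item: {item}
Desc: {desc}
Sizes: {sizes}
Price: ${price}"""

def build_format_prompt(example_products, messy_text):
    """Render each example product in the template, then append the
    instruction plus the messy unstructured text to be reformatted."""
    parts = []
    for i, product in enumerate(example_products, 1):
        parts.append(f"Product Format Example #{i}\n" + TEMPLATE.format(**product))
    parts.append("Put these products into the format used in the examples above:\n"
                 + messy_text)
    return "\n\n".join(parts)
```

Because every example is rendered from the same template, the format never drifts between examples, which is exactly the consistency the model needs to lock onto the pattern.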

Few-shot prompting can be especially important if you are using open or uncensored LLMs for writing. If you are trying to establish a particular writing style, there is no substitute for feeding the model multiple idealized examples in your prompt. There are many ways to make language model output unique, but to make a model sound like YOU takes input from you.

(Note: This is a version of a guide I have previously posted elsewhere – D.D.)