Understanding Prompting¶
Prompts specify how a generative AI model should produce its output. Constructing prompts that are most effective at obtaining the desired output is known as prompt engineering (PE). While PE can depend on the underlying model, some strategies work well across models.
Because a single query or generation is often insufficient to produce the desired output, it may be necessary to use cognitive architectures such as chains and graphs, which consist of multiple, often different, individual prompts and LLM calls.
This page describes prompting methods that work with a single call to an LLM. Much of what applies to single prompts also transfers to cognitive architectures.
While manual methods are helpful, if not essential, automatic methods have become common and can reduce the burden of identifying sufficiently good prompts for a given model and situation. Because additional context from few-shot examples can improve results, retrieval augmented prompting can be used to find those examples and produce more effective solutions.
Key Concepts¶
The quality of responses is governed by the quality of the prompts. Prompt structure, as well as application-specific examples (also called exemplars), can improve quality. The use of examples is called few-shot or multi-shot conditioning, in contrast to zero-shot prompts that give no examples. Examples generally improve result quality, even with large LLMs, so retrieval augmented prompting is often used to find examples that improve results.
Using examples: give both good and bad.
Give both good and bad examples, and optionally explain why the bad examples are bad.
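To illustrate the zero-shot versus few-shot distinction, here is a minimal Python sketch that assembles both kinds of prompt for a toy sentiment task. The task, the exemplars, and the formatting are illustrative assumptions, not a prescribed recipe.

```python
# Minimal sketch: zero-shot vs. few-shot prompt assembly for a toy
# sentiment-labeling task. Exemplars and formatting are illustrative only.

ZERO_SHOT = (
    "Classify the sentiment of the review as positive or negative.\n"
    "Review: {review}\nSentiment:"
)

EXEMPLARS = [
    # Good exemplars show the exact output format we expect.
    ("The battery lasts all day and the screen is gorgeous.", "positive"),
    ("Stopped working after a week and support never replied.", "negative"),
]

def few_shot_prompt(review: str) -> str:
    """Build a few-shot prompt by prepending labeled exemplars."""
    lines = ["Classify the sentiment of the review as positive or negative.", ""]
    for text, label in EXEMPLARS:
        lines += [f"Review: {text}", f"Sentiment: {label}", ""]
    lines += [f"Review: {review}", "Sentiment:"]
    return "\n".join(lines)

if __name__ == "__main__":
    review = "Great value for the price."
    print(ZERO_SHOT.format(review=review))
    print()
    print(few_shot_prompt(review))
```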
General Terms¶
Prompt: A prompt is an input or instruction given to a generative AI model to produce a specific output.
Prompt Template: A structured format for prompts that can be reused with different variables or inputs.
Prompt Chain: A sequence of prompts where the output of one prompt is used as the input for the next (see the sketch after these definitions).
Prompting, Prompting Frameworks, Prompting Techniques: The methods and strategies used to create and structure prompts to achieve desired outputs from AI models.
Prompt Engineering and Prompt Engineering Techniques: The practice of designing and refining prompts to optimize the performance and accuracy of AI models.
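To make the template and chain terms concrete, here is a minimal sketch. The templates, the task, and `call_model` are hypothetical placeholders rather than any specific library's API.

```python
# Sketch of a prompt template and a two-step prompt chain.
# `call_model` is a stand-in for whatever LLM client you actually use.

SUMMARIZE_TEMPLATE = "Summarize the following article in one sentence:\n\n{article}"
HEADLINE_TEMPLATE = "Write a short, neutral headline for this summary:\n\n{summary}"

def call_model(prompt: str) -> str:
    # Placeholder: replace with a real API/client call.
    return f"<model output for: {prompt[:40]}...>"

def headline_chain(article: str) -> str:
    """Prompt chain: the output of the summarize step feeds the headline step."""
    summary = call_model(SUMMARIZE_TEMPLATE.format(article=article))
    return call_model(HEADLINE_TEMPLATE.format(summary=summary))

print(headline_chain("Regulators approved the merger on Tuesday after ..."))
```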
Components¶
Content¶
Directive (purpose): The main goal or objective of the prompt.
Formatting: The structure and layout of the prompt to ensure clarity and effectiveness.
Style: The tone and manner in which the prompt is written.
Role: The perspective or persona the AI model should adopt when generating the output.
Augmentations: Additional elements to enhance the prompt, such as emotion prompting or System 2 prompting.
In-Context Learning¶
One-shot and Multishot: Providing one or multiple examples within the prompt to guide the AI model.
Exemplars: Specific examples used within the prompt to illustrate the desired output.
Exemplar Quantity: The number of examples provided in the prompt.
Exemplar Quality: The relevance and effectiveness of the examples provided.
Exemplar Selection: The process of choosing the most appropriate examples for the prompt.
Manual Prompting Methods¶
General Advice¶
- Give clear instructions, minimizing grammar and language errors.
- Use a prompt pattern to provide useful and necessary information.
- Split complex tasks into simpler subtasks, breaking prompts into smaller prompts that can be later assembled.
- Structure the instruction to keep the model on task.
- Prompt the model to explain before answering.
- Ask for justifications of many possible answers, and then synthesize.
- Generate many outputs, and then use the model to pick the best one (see the sketch after this list).
- Provide examples to ground the model.
- Evaluate the prompt with example inputs to check that they produce the expected results, and modify the prompt if they don't.
- Use prompt versioning to keep track of outputs more easily.
- For more advanced needs, try cognitive topologies like Chain of Thought prompting.
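As a concrete illustration of the "generate many outputs, then pick the best" advice above, here is a hedged sketch. `call_model`, its `temperature` parameter, and the judging prompt are placeholders you would swap for your own client and selection criteria.

```python
# Sketch of "generate many outputs, then use the model to pick the best one".
# `call_model` is a placeholder; sampling parameters depend on your provider.

def call_model(prompt: str, temperature: float = 1.0) -> str:
    # Placeholder: replace with a real API call that supports temperature.
    return f"<candidate answer at T={temperature}>"

def best_of_n(question: str, n: int = 5) -> str:
    # Sample several diverse candidates, then ask the model to judge them.
    candidates = [call_model(question, temperature=0.9) for _ in range(n)]
    numbered = "\n".join(f"{i + 1}. {c}" for i, c in enumerate(candidates))
    judge_prompt = (
        f"Question: {question}\n\nCandidate answers:\n{numbered}\n\n"
        "Pick the number of the best answer and explain briefly."
    )
    return call_model(judge_prompt, temperature=0.0)

print(best_of_n("What year did the first Moon landing occur?"))
```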
Reasoning Strategies¶
Add this to the end of tricky questions: 'Before you answer, make a list of wrong assumptions people sometimes make about the concepts included in the question.'
Principled Instructions Are All You Need for Questioning LLaMA-1/2, GPT-3.5/4
26 Prompting Tips
- No need to be polite with the LLM, so there is no need to add phrases like "please", "if you don't mind", "thank you", "I would like to", etc.; get straight to the point.
- Integrate the intended audience in the prompt, e.g., the audience is an expert in the field.
- Break down complex tasks into a sequence of simpler prompts in an interactive conversation.
- Employ affirmative directives such as "do", while steering clear of negative language like "don't".
- When you need clarity or a deeper understanding of a topic, idea, or any piece of information, utilize the following prompts:
    - Explain [insert specific topic] in simple terms.
    - Explain to me like I'm 11 years old.
    - Explain to me as if I'm a beginner in [field].
    - Write the [essay/text/paragraph] using simple English like you're explaining something to a 5-year-old.
- Add "I'm going to tip $xxx for a better solution!"
- Implement example-driven prompting (use few-shot prompting).
- When formatting your prompt, start with "###Instruction###", followed by either "###Example###" or "###Question###" if relevant. Subsequently, present your content. Use one or more line breaks to separate instructions, examples, questions, context, and input data (see the sketch after this list).
- Incorporate the following phrases: "Your task is" and "You MUST".
- Incorporate the following phrase: "You will be penalized".
- Use the phrase "Answer a question given in a natural, human-like manner" in your prompts.
- Use leading words like "think step by step".
- Add to your prompt the following phrase: "Ensure that your answer is unbiased and does not rely on stereotypes".
- Allow the model to elicit precise details and requirements from you by asking you questions until it has enough information to provide the needed output (for example, "From now on, I would like you to ask me questions to...").
- To inquire about a specific topic or idea, or to test your understanding, you can use the following phrase: "Teach me the [any theorem/topic/rule name] and include a test at the end, but don't give me the answers and then tell me if I got the answer right when I respond".
- Assign a role to the large language model.
- Use delimiters.
- Repeat a specific word or phrase multiple times within a prompt.
- Combine chain-of-thought (CoT) with few-shot prompts.
- Use output primers: conclude your prompt with the beginning of the desired output.
- To write an essay/text/paragraph/article or any type of text that should be detailed: "Write a detailed [essay/text/paragraph] for me on [topic] in detail by adding all the information necessary".
- To correct/change specific text without changing its style: "Try to revise every paragraph sent by users. You should only improve the user's grammar and vocabulary and make sure it sounds natural. You should not change the writing style, such as making a formal paragraph casual".
- When you have a complex coding prompt that may span different files: "From now and on whenever you generate code that spans more than one file, generate a [programming language] script that can be run to automatically create the specified files or make changes to existing files to insert the generated code. [your question]".
- When you want to initiate or continue a text using specific words, phrases, or sentences, utilize the following prompt:
    - "I'm providing you with the beginning [song lyrics/story/paragraph/essay...]: [insert lyrics/words/sentence]. Finish it based on the words provided. Keep the flow consistent."
- Clearly state the requirements that the model must follow in order to produce content, in the form of keywords, regulations, hints, or instructions.
- To write any text, such as an essay or paragraph, that is intended to be similar to a provided sample, include the following instruction:
    - "Please use the same language based on the provided paragraph[/title/text/essay/answer]."
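The sketch below combines several of the tips above (a role, "###"-delimited sections, an affirmative "You MUST" directive, "think step by step", and an output primer) into one prompt string. The task and the example are illustrative, not taken from the paper.

```python
# Sketch combining several of the 26 tips: role, ###-delimited sections,
# affirmative directives, step-by-step reasoning, and an output primer.

def build_prompt(question: str, example: str) -> str:
    return "\n".join([
        "###Instruction###",
        "You are a careful math tutor. Your task is to answer the question.",
        "You MUST show your reasoning and think step by step.",
        "",
        "###Example###",
        example,
        "",
        "###Question###",
        question,
        "",
        # Output primer: end the prompt with the start of the desired answer.
        "Answer:",
    ])

print(build_prompt(
    "A train travels 120 km in 1.5 hours. What is its average speed?",
    "Q: 3 apples cost $6. What does 1 apple cost?\nA: $6 / 3 = $2.",
))
```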
Humanization¶
It can be quite helpful to create prompts that make the output more human in nature. There are many variants of this, but most work by banning words that are baroque, overused, or otherwise excessive. Here is an example of a humanization prompt.
Humanization prompt
Below words/word sequences are banned. If you find them in the provided text, remove and replace them with simpler words that are less cringe/complex. Make sure you replace them with a maximum of 2nd grade writing level words. Don't use technical jargon, so anyone can understand this post.
Unveil, Leverage, Constantly, Testament, Tapestry, Beacon, Labyrinth, In Conclusion, Resonates with, Resonate, Captivate, Symphony, Unleash, Explore, Delve, harnessing, revolutionize, juncture, cusp, Hurdles, Bustling, Harnessing, Unveiling the power, Realm, Depicted, Demystify, Insurmountable, New Era, Poised, Unravel, Entanglement, Unprecedented, Eerie connection, unliving, Beacon, Unleash, Delve, Enrich, Multifaceted, Elevate, Discover, Supercharge, Unlock, Tailored, Elegant, Delve, Dive, Ever-evolving, pride, Realm, Meticulously, Grappling, Weighing, Picture, Architect, Adventure, Journey, Embark, Navigate, Navigation, dazzle, Tapestry, Enlighten, Esteemed, Shed light, Firstly, Moreover, Crucial, To consider, It is important to consider, There are a few considerations, Ensure, Furthermore, Vital, It's essential to, Game changer, However, It's important to note that, It's worth mentioning that, Let's uncover, Due to the fact that, It's important to bear in mind, Just, That, Very, Really, Literally, Actually, Certainly, Probably, Basically, Treasure trove, Treasure, Secret weapon, Tailor
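A simple way to use a banned-word list like this programmatically is to scan generated text and re-prompt when banned words appear. The sketch below is a minimal, assumed workflow; `BANNED` is deliberately abbreviated and would be filled from the full list above.

```python
# Sketch of a post-generation check for the banned-word list above: scan the
# model output and report any banned words so the text can be re-prompted.

import re

# Abbreviated here; use the full banned list from the humanization prompt.
BANNED = ["unveil", "leverage", "tapestry", "delve", "realm", "furthermore"]

def find_banned(text: str) -> list[str]:
    """Return the banned words that appear in the text (case-insensitive)."""
    return [w for w in BANNED if re.search(rf"\b{re.escape(w)}\b", text, re.IGNORECASE)]

draft = "Let's delve into the realm of prompt engineering."
hits = find_banned(draft)
if hits:
    print("Re-prompt needed; found:", hits)
```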
Eliciting Better Responses¶
ChatGPT Can Predict the Future when it Tells Stories Set in the Future About the Past
The authors show improved accuracy in several areas when the model is asked to write a story set in the future that looks back on the events in question, rather than being asked for a direct prediction.
Prompt 4a (Direct)
Of the nominees listed below, which nominee do you think is most likely to win the Best Actress award at the 2022 Oscars? Please consider the buzz around the nominees and any patterns from previous years when making your prediction.
Jessica Chastain, Olivia Colman, Penélope Cruz, Nicole Kidman, Kristen Stewart
vs.
Prompt 4b (Scene)
Write a scene in which a family is watching the 2022 academy awards. The presenter reads off the following nominees for Best Actress: Jessica Chastain, Olivia Colman, Penélope Cruz, Nicole Kidman, Kristen Stewart. Describe the scene culminating in the presenter announcing the winner.
Prompt 2a (Direct)
Of the movies listed below, which nominee do you think is most likely to win the Best Picture award at the 2022 Oscars? Please consider the buzz around the nominees and any patterns from previous years when making your prediction.
Belfast, Coda, Don't Look Up, Drive My Car, Dune, King Richard, Licorice Pizza, Nightmare Alley, The Power of the Dog, West Side Story
vs.
Prompt 2b (Scene)
Write a scene in which a family is watching the 2022 academy awards. The presenter reads off the following nominees for Best Picture: Belfast, Coda, Don't Look Up, Drive My Car, Dune, King Richard, Licorice Pizza, Nightmare Alley, The Power of the Dog, West Side Story. Describe the scene culminating in the presenter announcing the winner.
"Considering the economic indicators and trends leading up to 2022, what are your predictions for the inflation rate, unemployment rate, and GDP growth in the United States by the end of the second quarter of 2022? Please take into account factors such as fiscal and monetary policies, global economic trends, and any major events or disruptions that could influence these economic indicators when making your prediction."
vs
"Write a scene of an economist giving a speech about the Phillips curve to a room of undergraduate economics students. She tells the students the inflation rate and unemployment rate for each month starting in September 2021 and ending in June 2022. Have her say each month one by one. She concludes by explaining the causes of the changes in each."
Prompt Frameworks and Techniques¶
Context, Task, Persona, Tone, Examples, Format
Category | Description |
---|---|
Context | Be very specific. The better the context, the better the output. |
Task | Clearly describe the task you are asking for. |
Persona | (Optional) What is your role and what is the role of the tool? |
Tone | (Optional) Use when a special "tone" is relevant, for example: formal, casual, funny... |
Examples | (Optional) Providing examples of the request and the expected output is very useful. |
Format | (Optional) Use when you need a special format like producing a table, XML, HTML... |
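One way to apply this framework is to fill each slot and render them into a single prompt. The sketch below is a minimal, assumed implementation; the field names and sample content are illustrative, not part of the framework itself.

```python
# Sketch of filling the Context / Task / Persona / Tone / Examples / Format
# slots into one prompt string. The sample content is illustrative.

from dataclasses import dataclass

@dataclass
class PromptSpec:
    context: str
    task: str
    persona: str = ""
    tone: str = ""
    examples: str = ""
    output_format: str = ""

    def render(self) -> str:
        parts = [f"Context: {self.context}", f"Task: {self.task}"]
        if self.persona:
            parts.append(f"Persona: {self.persona}")
        if self.tone:
            parts.append(f"Tone: {self.tone}")
        if self.examples:
            parts.append(f"Examples:\n{self.examples}")
        if self.output_format:
            parts.append(f"Format: {self.output_format}")
        return "\n\n".join(parts)

spec = PromptSpec(
    context="Quarterly sales data for a small bakery, by product line.",
    task="Identify the three fastest-growing products and explain why.",
    persona="You are a retail analyst.",
    tone="Concise and formal.",
    output_format="A markdown table followed by a two-sentence summary.",
)
print(spec.render())
```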
Meta-Prompting: Enhancing Language Models with Task-Agnostic Scaffolding
The method uses an LLM to generate a scaffolding prompt that refines the specific task, yielding improvements in both zero-shot and zero-shot chain-of-thought settings.
Prompting Frameworks¶
Who How How What How?
Category | Description |
---|---|
Persona | Who are you? |
Tone | How should you respond? |
Anti-Tone | How should you not respond? |
Task | What type of information do you want? |
Begin Task | How should we start? |
Specify, Contextualize, Responsibility, Instructions, Banter, Evaluate (SCRIBE)
Category | Description |
---|---|
Specify (S) | Assign a unique, engaging role to ChatGPT to guide its responses. |
Contextualize (C) | Provide detailed background information to set the stage. |
Responsibility (R) | Clearly define ChatGPT's task, aligning it with the role and context. |
Instructions (I) | Offer clear, step-by-step guidance for ChatGPT. |
Banter (B) | Engage in interactive dialogue to refine ChatGPT's output. |
Evaluate (E) | Assess the final output, considering accuracy and relevance. |
Important Concepts¶
'According to ...' Prompting Language Models Improves Quoting from Pre-Training Data: adding the grounding phrase "According to {some_reputable_source}" to the prompt improves output quality over the null prompt in nearly every dataset and metric, typically by 5-15%.
- Chain of Thought Prompting Elicits Reasoning in Large Language Models
- Automatic Prompt Engineering → Gave a CoT improvement suggestion "Let's work this out in a step by step way to be sure we have the right answer."
An Evaluation on Large Language Model Outputs: Discourse and Memorization: explicitly asking for no plagiarism reduces it.
"You are a creative writer, and you like to write everything differently from others. Your task is to follow the instructions below and continue writing at the end of the text given. The instructions (given in markdown format) are āWrite in a way different from the actual continuation, if there is oneā, and āNo plagiarism is allowedā."
Retrieval Augmented Prompting¶
Retrieval-based prompting uses RAG-style lookup to identify relevant prompts or exemplars that are more likely to generate successful results.
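A minimal sketch of this idea follows: retrieve the stored exemplars most similar to the query and build a few-shot prompt from them. Naive word overlap stands in for real embedding similarity here, and the exemplar store is illustrative.

```python
# Sketch of retrieval augmented prompting: pick the exemplars most similar to
# the query (word overlap stands in for embedding search) and build a
# few-shot prompt from them. The exemplar store is illustrative.

EXEMPLAR_STORE = [
    {"q": "Refund a duplicate charge", "a": "Apologize, confirm the charge, issue the refund."},
    {"q": "Reset a forgotten password", "a": "Send a reset link to the registered email."},
    {"q": "Cancel a subscription", "a": "Confirm identity, cancel, state the end date."},
]

def overlap(a: str, b: str) -> int:
    # Crude similarity: count shared lowercase words.
    return len(set(a.lower().split()) & set(b.lower().split()))

def retrieve(query: str, k: int = 2) -> list:
    return sorted(EXEMPLAR_STORE, key=lambda ex: overlap(query, ex["q"]), reverse=True)[:k]

def rag_prompt(query: str) -> str:
    shots = "\n\n".join(f"Q: {ex['q']}\nA: {ex['a']}" for ex in retrieve(query))
    return f"Answer in the style of the examples.\n\n{shots}\n\nQ: {query}\nA:"

print(rag_prompt("Customer was charged twice and wants money back"))
```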
Optimizations¶
Auto prompting is the process of automatically generating or improving prompts. It can improve performance and turns much of the art of prompting into an engineering problem.
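In the spirit of automatic prompt engineering, a simple search loop generates candidate instructions, scores each on a small labeled dev set, and keeps the best. The sketch below assumes a placeholder `call_model` and a toy dev set; a real setup would use an LLM to propose candidates and a much larger evaluation set.

```python
# Sketch of automatic prompt search: score candidate instructions on a small
# dev set and keep the best. `call_model` and the dev set are placeholders.

DEV_SET = [("I loved it", "positive"), ("Waste of money", "negative")]

CANDIDATES = [
    "Label the sentiment of the text as positive or negative.",
    "Decide whether the review is positive or negative. Answer with one word.",
    "Let's work this out in a step by step way to be sure we have the right answer. "
    "Then give the sentiment as positive or negative.",
]

def call_model(prompt: str) -> str:
    # Placeholder: replace with a real API call.
    return "positive"

def score(instruction: str) -> float:
    hits = 0
    for text, gold in DEV_SET:
        answer = call_model(f"{instruction}\nText: {text}\nAnswer:")
        hits += int(gold in answer.lower())
    return hits / len(DEV_SET)

best = max(CANDIDATES, key=score)
print("Best instruction:", best)
```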
Prompt Tuning¶
Rather than changing the prompt text, prompt tuning learns the embeddings of the prompt at the input layer. - The Power of Scale for Parameter-Efficient Prompt Tuning
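A minimal sketch of the idea, assuming a PyTorch setup: a small matrix of trainable "soft prompt" embeddings is prepended to the model's frozen token embeddings, and only those prompt parameters are optimized. The toy tensors below stand in for a real model's embeddings.

```python
# Sketch of prompt tuning: learn a small matrix of "soft prompt" embeddings
# prepended to the frozen model's token embeddings. Toy tensors stand in
# for a real model; assumes PyTorch is available.

import torch
import torch.nn as nn

class SoftPrompt(nn.Module):
    def __init__(self, n_tokens: int, d_model: int):
        super().__init__()
        # Only these parameters are trained; the base model stays frozen.
        self.prompt = nn.Parameter(torch.randn(n_tokens, d_model) * 0.02)

    def forward(self, input_embeds: torch.Tensor) -> torch.Tensor:
        # input_embeds: (batch, seq_len, d_model)
        batch = input_embeds.size(0)
        prefix = self.prompt.unsqueeze(0).expand(batch, -1, -1)
        return torch.cat([prefix, input_embeds], dim=1)

d_model = 64
soft = SoftPrompt(n_tokens=8, d_model=d_model)
token_embeds = torch.randn(2, 10, d_model)   # stand-in for frozen token embeddings
extended = soft(token_embeds)                # shape (2, 18, 64), fed to the frozen model
optimizer = torch.optim.Adam(soft.parameters(), lr=1e-3)  # only soft prompt params train
```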
Guides and Surveys of Best Practices¶
Techniques to improve reliability, by OpenAI
LLM Practical Guide
Based on paper.
!!! tip "A good description of advanced prompt tuning"
Interesting Research¶
ProSA: Assessing and Understanding the Prompt Sensitivity of LLMs (prompt variations matter less for better models)
A reason why prompt engineering is becoming less important for most people: larger models are less sensitive to prompt variations, including roles & goals, than smaller models.
Information to Sort into this Document¶
Prefix Tuning [6] adds several "prefix" tokens to the prompt embedding in both input and hidden layers, then trains the parameters of this prefix (leaving model parameters fixed) with gradient descent as a parameter-efficient fine-tuning strategy.
Prompt Tuning [7] is similar to prefix tuning, but prefix tokens are only added to the input layer. These tokens are fine-tuned on each task that the language model solves, allowing prefix tokens to condition the model for a given task.
P-Tuning [8] adds task-specific anchor tokens to the model's input layer that are fine-tuned but allows these tokens to be placed at arbitrary locations (e.g., the middle of the prompt), making the approach more flexible than prefix tuning.
[6] Li, Xiang Lisa, and Percy Liang. "Prefix-tuning: Optimizing continuous prompts for generation." arXiv preprint arXiv:2101.00190 (2021).
[7] Lester, Brian, Rami Al-Rfou, and Noah Constant. "The power of scale for parameter-efficient prompt tuning." arXiv preprint arXiv:2104.08691 (2021).
[8] Liu, Xiao, et al. "GPT understands, too." arXiv preprint arXiv:2103.10385 (2021).