A Systematic Survey of Automatic Prompt Optimization Techniques

In their comprehensive survey paper, Ramnath et al. provide a systematic review of Automatic Prompt Optimization (APO) methods for large language models. The authors present:

  1. A formal definition of APO and a unifying five-part framework for categorizing techniques
  2. A thorough analysis of the current landscape of APO methods along the framework's five dimensions:
     - Prompt Initialization: How initial prompts are created or selected
     - Evaluation Mechanisms: Methods for assessing prompt quality (LLM-based, metric-based, human feedback)
     - Candidate Prompt Generation: Techniques for creating new prompt candidates
     - Filtering Strategies: Approaches to select the most promising prompts
     - Termination Criteria: When to stop the optimization process
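The five components above naturally compose into an iterative search loop. As a minimal sketch (not the paper's formalism), the following Python skeleton wires them together, with toy stand-ins for the evaluation and generation steps; the function names, the beam-search filtering choice, and the toy character-edit task are all illustrative assumptions:

```python
from typing import Callable, Iterable

def optimize_prompt(
    seed_prompts: Iterable[str],                      # prompt initialization
    evaluate: Callable[[str], float],                 # evaluation mechanism
    generate_candidates: Callable[[str], list[str]],  # candidate generation
    beam_width: int = 2,                              # filtering strategy (keep top-k)
    max_iters: int = 30,                              # termination: iteration budget
) -> str:
    pool = sorted(seed_prompts, key=evaluate, reverse=True)[:beam_width]
    best = pool[0]
    for _ in range(max_iters):
        # Expand the pool with new candidates, deduplicating while keeping order.
        candidates = list(dict.fromkeys(
            pool + [c for p in pool for c in generate_candidates(p)]))
        pool = sorted(candidates, key=evaluate, reverse=True)[:beam_width]
        if evaluate(pool[0]) <= evaluate(best):  # termination: no improvement
            break
        best = pool[0]
    return best

# Toy stand-ins (purely illustrative): the "evaluation mechanism" rewards
# character overlap with a target prompt, and "candidate generation" makes
# one-character edits. In real APO these would be an LLM-scored metric and
# an LLM-driven rewriter, respectively.
TARGET = "Answer concisely."

def evaluate(p: str) -> float:
    matches = sum(a == b for a, b in zip(p, TARGET))
    return matches - abs(len(p) - len(TARGET))

def generate_candidates(p: str) -> list[str]:
    return [p + ch for ch in " .aceilnosy"] + [p[:-1]]
```

On the toy task, `optimize_prompt(["Answer"], evaluate, generate_candidates)` hill-climbs to the target string, stopping early once no candidate improves on the incumbent.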

Overall, the survey provides an excellent reference for understanding the state of the art in automatic prompt optimization and identifies promising directions for future research.

AutoPrompt [5] augments the original prompt input with a set of "trigger tokens", shared across all input data, that are selected via a gradient-based search to improve task performance.
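The core of that gradient-based search can be sketched with a first-order (HotFlip-style) approximation: the effect of swapping the current trigger token for another vocabulary token is estimated as the dot product of the embedding difference with the gradient of the loss at the current embedding. The NumPy sketch below is illustrative only; the embedding matrix is a random stand-in for a real LM's, and the function name is an assumption, not AutoPrompt's actual API:

```python
import numpy as np

rng = np.random.default_rng(0)
V, d = 50, 8                     # toy vocabulary size and embedding dimension
E = rng.normal(size=(V, d))      # stand-in for a real LM's token embedding matrix

def hotflip_candidates(grad: np.ndarray, cur_token: int, k: int = 3) -> list[int]:
    """Score every vocabulary token as a replacement for the current trigger
    token via the first-order approximation: delta_loss ~= (e_w - e_cur) . grad.
    The k most negative scores are the swaps predicted to reduce loss most."""
    scores = (E - E[cur_token]) @ grad
    return np.argsort(scores)[:k].tolist()

# Illustration: plant an embedding strongly anti-aligned with the gradient;
# the first-order score should single it out as the best replacement.
grad = np.ones(d)
E[7] = -10.0 * np.ones(d)
```

In the full method, these top-k candidate swaps are then re-scored with actual forward passes before one is accepted, since the linear approximation is only a local estimate.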
