A Systematic Survey of Automatic Prompt Optimization Techniques
In their survey, Ramnath et al. provide a systematic review of Automatic Prompt Optimization (APO) methods for large language models. The authors present:
- A formal definition of APO and a unifying 5-part framework for categorizing techniques
- A thorough analysis of the current landscape of APO methods, organized around five components (a minimal loop sketch follows this list):
  - Prompt Initialization: How initial prompts are created or selected
  - Evaluation Mechanisms: Methods for assessing prompt quality (LLM-based, metric-based, human feedback)
  - Candidate Prompt Generation: Techniques for creating new prompt candidates
  - Filtering Strategies: Approaches to select the most promising prompts
  - Termination Criteria: When to stop the optimization process
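The interplay of these five components can be summarized as a generic search loop. Below is a minimal Python sketch; the helpers `seed_prompts`, `score`, and `mutate` are hypothetical stand-ins for the many concrete choices the survey catalogs, not the API of any particular system.

```python
# Minimal sketch of the five-part APO loop. All helpers passed in
# (seed_prompts, score, mutate) are hypothetical placeholders.

def optimize_prompt(task_examples, seed_prompts, score, mutate,
                    beam_size=4, max_rounds=10, patience=3):
    """Generic APO loop: initialize -> evaluate -> generate -> filter -> stop."""
    # 1. Prompt initialization: start from manually written or induced seeds.
    population = list(seed_prompts)
    best_prompt, best_score = None, float("-inf")
    rounds_without_gain = 0

    for _ in range(max_rounds):
        # 2. Evaluation: score each candidate on held-out examples
        #    (a task metric, an LLM judge, or human feedback in real systems).
        scored = sorted(((score(p, task_examples), p) for p in population),
                        key=lambda sp: sp[0], reverse=True)

        # 5. Termination: stop early once the best score plateaus.
        top_score, top_prompt = scored[0]
        if top_score > best_score:
            best_score, best_prompt = top_score, top_prompt
            rounds_without_gain = 0
        else:
            rounds_without_gain += 1
            if rounds_without_gain >= patience:
                break

        # 4. Filtering: keep only the most promising candidates (a beam).
        survivors = [p for _, p in scored[:beam_size]]

        # 3. Candidate generation: derive new prompts from the survivors,
        #    e.g. via LLM paraphrasing or evolutionary mutation.
        population = survivors + [mutate(p) for p in survivors]

    return best_prompt, best_score
```

Concrete systems differ mainly in how `score` and `mutate` are realized, corresponding to the evaluation and candidate-generation components above.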
The survey highlights several key insights:
- APO methods can significantly improve LLM performance across a wide range of tasks without requiring access to model parameters
- Different optimization strategies (evolutionary algorithms, gradient-based methods, etc.) come with distinct trade-offs
- Human feedback integration remains important for certain applications
- The field is rapidly evolving with new techniques emerging regularly
This survey provides an excellent reference for understanding the state-of-the-art in automatic prompt optimization and identifies promising directions for future research.
AutoPrompt [5] augments the original prompt input with a set of "trigger tokens", shared across all input data, that are selected via a gradient-based search to improve task performance.
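To make the gradient-based search concrete, here is a toy-scale PyTorch sketch of a HotFlip-style first-order update, the kind of approximation AutoPrompt's trigger search builds on. The embedding table, classifier head, and greedy simultaneous swap are illustrative assumptions, not AutoPrompt's actual implementation.

```python
# Toy HotFlip-style trigger search: the embedding table, linear head,
# and greedy simultaneous swap are illustrative stand-ins, not
# AutoPrompt's actual model or candidate re-ranking step.
import torch

torch.manual_seed(0)
vocab_size, dim, num_triggers = 100, 16, 3

embedding = torch.nn.Embedding(vocab_size, dim)
classifier = torch.nn.Linear(dim, 2)               # toy downstream head
trigger_ids = torch.randint(vocab_size, (num_triggers,))
label = torch.tensor([1])                          # target label to promote

for step in range(5):
    trig_embeds = embedding(trigger_ids)           # (num_triggers, dim)
    trig_embeds.retain_grad()                      # keep grads on non-leaf

    logits = classifier(trig_embeds.mean(dim=0, keepdim=True))
    loss = torch.nn.functional.cross_entropy(logits, label)
    loss.backward()

    # First-order (HotFlip) approximation: swapping token i -> j changes
    # the loss by roughly (e_j - e_i) . grad_i, so for each slot we pick
    # the vocabulary token minimizing e_j . grad_i (the e_i term is
    # constant per slot). All slots are updated at once for brevity.
    with torch.no_grad():
        scores = trig_embeds.grad @ embedding.weight.T   # (triggers, vocab)
        trigger_ids = scores.argmin(dim=1)

    embedding.zero_grad()
    classifier.zero_grad()
    print(f"step {step}: loss={loss.item():.4f}, triggers={trigger_ids.tolist()}")
```

In AutoPrompt proper, the top-k tokens under this linearized score are re-evaluated against the actual language model before a swap is accepted, rather than applied greedily as above.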