
Prompt Compression

Prompt compression methods shorten prompt inputs while preserving the information needed for downstream generation, so that the compressed prompt yields results equivalent to those of the original.

(Long)LLMLingua: Compressing Prompts for Accelerated Inference of Large Language Models

Paper: LLMLingua: Compressing Prompts for Accelerated Inference of Large Language Models

Paper: LongLLMLingua: Accelerating and Enhancing LLMs in Long Context Scenarios via Prompt Compression

The authors use a smaller language model to identify and remove non-essential tokens from prompts, achieving up to 20x compression with minimal performance loss. The method generates a compressed prompt from an original prompt, using a budget controller to dynamically allocate compression ratios across the different prompt components (e.g., instruction, demonstrations, question), which maintains semantic integrity even under high compression ratios.
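The core idea can be sketched without the actual library: score each token's information content and drop the least informative ones, while a budget controller spends the compression budget on the components that tolerate it best. The sketch below is a minimal, dependency-free illustration only; it substitutes a unigram log-probability for the small causal LM's perplexity scores that LLMLingua actually uses, and the `compress_prompt` signature and component split are assumptions for illustration, not the library's API.

```python
import math
from collections import Counter

def token_importance(tokens):
    # Rarer tokens carry more information under a unigram model:
    # importance(t) = -log p(t). LLMLingua instead scores tokens with a
    # small causal LM; the unigram proxy keeps this sketch self-contained.
    counts = Counter(tokens)
    total = sum(counts.values())
    return {t: -math.log(counts[t] / total) for t in counts}

def compress(tokens, keep_ratio):
    # Keep only the highest-information tokens until the token budget
    # is met, preserving their original order in the prompt.
    imp = token_importance(tokens)
    budget = max(1, int(len(tokens) * keep_ratio))
    ranked = sorted(range(len(tokens)),
                    key=lambda i: imp[tokens[i]], reverse=True)
    keep = set(ranked[:budget])
    return [t for i, t in enumerate(tokens) if i in keep]

def compress_prompt(instruction, demonstrations, question, keep_ratio=0.5):
    # Toy budget controller: the instruction and question are kept
    # verbatim, and the compression budget is spent entirely on the
    # demonstrations, which tolerate higher compression.
    compressed = compress(demonstrations.split(), keep_ratio)
    return " ".join([instruction, " ".join(compressed), question])

if __name__ == "__main__":
    out = compress_prompt(
        "Answer the question using the examples.",
        "the cat sat on the mat the cat",
        "Where did the cat sit?",
        keep_ratio=0.5,
    )
    print(out)
```

With `keep_ratio=0.5`, repeated low-information tokens such as "the" are pruned from the demonstrations first, while the rarer content words survive; the real system additionally performs iterative token-level compression conditioned on the already-kept context.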


Pseudo code (figure)