Aussie AI

Input Compression

  • Last Updated 29 August, 2025
  • by David Spuler, Ph.D.

What is Input Compression?

Input compression is an LLM inference optimization that reduces an input text sequence to a shorter sequence of text or tokens, so that the LLM has fewer tokens to process and runs faster. Specific sub-techniques for compressing input text or tokens include token pruning, token merging, and prompt compression; research papers on these approaches are listed below, and a minimal code sketch of the basic idea follows.
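As a minimal sketch of the concept only (not any particular published method), the hypothetical function below shortens a prompt by dropping common low-information stopwords and collapsing repeated whitespace before the text would be tokenized and sent to the model. Real input compressors instead use learned importance scores, token pruning, or token merging; the stopword list and function names here are illustrative assumptions.

    // Toy input compression sketch: shrink a prompt before tokenization.
    // Assumption: a fixed stopword list stands in for a real importance model.
    #include <iostream>
    #include <sstream>
    #include <string>
    #include <unordered_set>

    std::string compress_prompt(const std::string& input) {
        // Hypothetical stopword list; production compressors score tokens
        // by importance (e.g., perplexity or attention) instead.
        static const std::unordered_set<std::string> stopwords = {
            "the", "a", "an", "of", "to", "and", "that", "is", "in", "it"
        };
        std::istringstream words(input);
        std::string word, output;
        while (words >> word) {                   // also collapses whitespace runs
            if (stopwords.count(word)) continue;  // drop low-information words
                                                  // (case-sensitive, punctuation-naive)
            if (!output.empty()) output += ' ';
            output += word;
        }
        return output;
    }

    int main() {
        std::string prompt =
            "Summarize the main findings of the report in a short paragraph.";
        std::string shorter = compress_prompt(prompt);
        std::cout << "Original:   " << prompt << "\n";
        std::cout << "Compressed: " << shorter << "\n";  // fewer tokens to process
        return 0;
    }

The compressed prompt preserves the content-bearing words while sending fewer tokens to the LLM; the trade-off, as with all lossy input compression, is a possible loss of nuance in the model's response.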

Research on Input Compression

Some of the general papers on token compression strategies:

  • Wangbo Zhao, Jiasheng Tang, Yizeng Han, Yibing Song, Kai Wang, Gao Huang, Fan Wang, Yang You, 18 Mar 2024, Dynamic Tuning Towards Parameter and Inference Efficiency for ViT Adaptation, https://arxiv.org/abs/2403.11808 (PEFT and adaptive inference and token pruning in Vision Transformers.)
  • Yuzhang Shang, Mu Cai, Bingxin Xu, Yong Jae Lee, Yan Yan, 25 Mar 2024, LLaVA-PruMerge: Adaptive Token Reduction for Efficient Large Multimodal Models, https://arxiv.org/abs/2403.15388 Code: https://llava-prumerge.github.io/ (Compresses input images based on redundant sections.)
  • Maxim Bonnaerens, Nov 2023, Resource-Efficient Deep Learning for Computer Vision, Ph.D. thesis, Ghent University, https://biblio.ugent.be/publication/01HEMGWENRT8C255N2RD9KAEJC/file/01HEMGZ9JYP8NXPSQJZM14ACT9 (Examines various vision Transformer optimizations including a NAS approached based on building blocks and also combined token pruning/merging for input compression.)
  • Yuan Yao, Tianyu Yu, Ao Zhang, Chongyi Wang, Junbo Cui, Hongji Zhu, Tianchi Cai, Haoyu Li, Weilin Zhao, Zhihui He, Qianyu Chen, Huarong Zhou, Zhensheng Zou, Haoye Zhang, Shengding Hu, Zhi Zheng, Jie Zhou, Jie Cai, Xu Han, Guoyang Zeng, Dahai Li, Zhiyuan Liu, Maosong Sun, 3 Aug 2024, MiniCPM-V: A GPT-4V Level MLLM on Your Phone, https://arxiv.org/abs/2408.01800 Code: https://github.com/OpenBMB/MiniCPM-V
  • Wei Chen, Zhiyuan Li, Shuo Xin, Yihao Wang, 28 Aug 2024, Dolphin: Long Context as a New Modality for Energy-Efficient On-Device Language Models, https://arxiv.org/abs/2408.15518 https://huggingface.co/NexaAIDev/Dolphin (Using vision transformer architecture to process longer text.)
  • Bosheng Qin, Juncheng Li, Siliang Tang, Yueting Zhuang, 24 Nov 2022, DBA: Efficient Transformer with Dynamic Bilinear Low-Rank Attention, https://arxiv.org/abs/2211.16368
  • Keda Tao, Can Qin, Haoxuan You, Yang Sui, Huan Wang, 22 Nov 2024, DyCoke: Dynamic Compression of Tokens for Fast Video Large Language Models, https://arxiv.org/abs/2411.15024
  • M. Xu, D. Cai, W. Yin, S. Wang, X. Jin, X. Liu, 2024, Resource-efficient Algorithms and Systems of Foundation Models: A Survey, ACM Computing Surveys, https://dl.acm.org/doi/pdf/10.1145/3706418
  • Zhijian Liu, Ligeng Zhu, Baifeng Shi, Zhuoyang Zhang, Yuming Lou, Shang Yang, Haocheng Xi, Shiyi Cao, Yuxian Gu, Dacheng Li, Xiuyu Li, Yunhao Fang, Yukang Chen, Cheng-Yu Hsieh, De-An Huang, An-Chieh Cheng, Vishwesh Nath, Jinyi Hu, Sifei Liu, Ranjay Krishna, Daguang Xu, Xiaolong Wang, Pavlo Molchanov, Jan Kautz, Hongxu Yin, Song Han, Yao Lu, 5 Dec 2024, NVILA: Efficient Frontier Visual Language Models, https://arxiv.org/abs/2412.04468
  • Hao Li, Changyao Tian, Jie Shao, Xizhou Zhu, Zhaokai Wang, Jinguo Zhu, Wenhan Dou, Xiaogang Wang, Hongsheng Li, Lewei Lu, Jifeng Dai, 12 Dec 2024, SynerGen-VL: Towards Synergistic Image Understanding and Generation with Vision Experts and Token Folding, https://arxiv.org/abs/2412.09604 (Reducing the precision of input images via token folding, with a special decoder at the end to ensure output is high precision.)
  • OpenAI, Dec 2024, OpenAI o1 and new tools for developers, https://openai.com/index/o1-and-new-tools-for-developers/ ("Lower latency: o1 uses on average 60% fewer reasoning tokens than o1-preview for a given request.")
  • Guoxuan Chen, Han Shi, Jiawei Li, Yihang Gao, Xiaozhe Ren, Yimeng Chen, Xin Jiang, Zhenguo Li, Weiyang Liu, Chao Huang, 17 Dec 2024 (v2), SepLLM: Accelerate Large Language Models by Compressing One Segment into One Separator, https://arxiv.org/abs/2412.12094 http://sepllm.github.io/
  • Tianqiao Liu, Zui Chen, Zitao Liu, Mi Tian, Weiqi Luo, 13 Sep 2024, Expediting and Elevating Large Language Model Reasoning via Hidden Chain-of-Thought Decoding, https://arxiv.org/abs/2409.08561 (Compressing the interim token sequences in Chain-of-Thought.)
  • Yu Kang, Xianghui Sun, Liangyu Chen, Wei Zou, 16 Dec 2024, C3oT: Generating Shorter Chain-of-Thought without Compromising Effectiveness, https://arxiv.org/abs/2412.11664 (Token pruning and prompt compression for Chain-of-Thought.)
  • Haoran You, Connelly Barnes, Yuqian Zhou, Yan Kang, Zhenbang Du, Wei Zhou, Lingzhi Zhang, Yotam Nitzan, Xiaoyang Liu, Zhe Lin, Eli Shechtman, Sohrab Amirghodsi, Yingyan Celine Lin, 22 Dec 2024, Layer- and Timestep-Adaptive Differentiable Token Compression Ratios for Efficient Diffusion Transformers, https://arxiv.org/abs/2412.16822
  • Fabio Montello, Ronja Güldenring, Simone Scardapane, Lazaros Nalpantidis, 13 Jan 2025, A Survey on Dynamic Neural Networks: from Computer Vision to Multi-modal Sensor Fusion, https://arxiv.org/abs/2501.07451 (Survey of adaptive inference optimizations: early exit, dynamic routing, token skimming.)
  • J. Köpke, A. Safan, 2024, Efficient LLM-Based Conversational Process Modeling, Business Process Management Workshops, https://isys.uni-klu.ac.at/PDF/BPM_2024_paper_1442.pdf (Examines and improves the token costs of prompt strategies in conversational sessions.)
  • Jingyang Yuan, Huazuo Gao, Damai Dai, Junyu Luo, Liang Zhao, Zhengyan Zhang, Zhenda Xie, Y. X. Wei, Lean Wang, Zhiping Xiao, Yuqing Wang, Chong Ruan, Ming Zhang, Wenfeng Liang, Wangding Zeng, 16 Feb 2025, Native Sparse Attention: Hardware-Aligned and Natively Trainable Sparse Attention, https://arxiv.org/abs/2502.11089
  • Xiaoran Liu, Ruixiao Li, Mianqiu Huang, Zhigeng Liu, Yuerong Song, Qipeng Guo, Siyang He, Qiqi Wang, Linlin Li, Qun Liu, Yaqian Zhou, Xuanjing Huang, Xipeng Qiu, 24 Feb 2025, Thus Spake Long-Context Large Language Model, https://arxiv.org/abs/2502.17129 (Impressive survey of many techniques to improve efficiency and accuracy of long context processing in both inference and training, covering text, video and multimodal models.)
  • Kele Shao, Keda Tao, Kejia Zhang, Sicheng Feng, Mu Cai, Yuzhang Shang, Haoxuan You, Can Qin, Yang Sui, Huan Wang, 30 Jul 2025 (v3), When Tokens Talk Too Much: A Survey of Multimodal Long-Context Token Compression across Images, Videos, and Audios, https://arxiv.org/abs/2507.20198 https://github.com/cokeshao/Awesome-Multimodal-Token-Compression
  • Naderdel Piero, Zacharias Cromwell, Nathaniel Wainwright, Matthias Nethercott, 8 Aug 2025, Contextual Reinforcement in Multimodal Token Compression for Large Language Models, https://arxiv.org/abs/2501.16658
  • John Harvill, Ziwei Fan, Hao Wang, Luke Huan, Anoop Deoras, Yizhou Sun, Hao Ding, 20 Aug 2025, Lossless Token Sequence Compression via Meta-Tokens, https://arxiv.org/abs/2506.00307

AI Books from Aussie AI



The Sweetest Lesson: Your Brain Versus AI: new book on AI intelligence theory:
  • Your brain is 50 times bigger than the best AI engines.
  • Truly intelligent AI will require more compute!
  • Another case of the bitter lesson?
  • Maybe it's the opposite of that: the sweetest lesson.

Get your copy from Amazon: The Sweetest Lesson



RAG Optimization: Accurate and Efficient LLM Applications: new book on RAG architectures:
  • Smarter RAG
  • Faster RAG
  • Cheaper RAG
  • Agentic RAG
  • RAG reasoning

Get your copy from Amazon: RAG Optimization



Generative AI Applications book:
  • Deciding on your AI project
  • Planning for success and safety
  • Designs and LLM architectures
  • Expediting development
  • Implementation and deployment

Get your copy from Amazon: Generative AI Applications



Generative AI in C++ programming book:
  • Generative AI coding in C++
  • Transformer engine speedups
  • LLM models
  • Phone and desktop AI
  • Code examples
  • Research citations

Get your copy from Amazon: Generative AI in C++



CUDA C++ Optimization book:
  • Faster CUDA C++ kernels
  • Optimization tools & techniques
  • Compute optimization
  • Memory optimization

Get your copy from Amazon: CUDA C++ Optimization



CUDA C++ Debugging book:
  • Debugging CUDA C++ kernels
  • Tools & techniques
  • Self-testing & reliability
  • Common GPU kernel bugs

Get your copy from Amazon: CUDA C++ Debugging

More AI Research

Read more about: