Aussie AI Blog
Promising LLM Inference Optimization Research
-
September 29, 2025
-
by David Spuler, Ph.D.
There are literally 500 ways to do LLM inference optimization. Here are some of the more interesting optimizations in recent research papers that warrant further attention.
Fused KV Caching
Fused KV caching, also called substring, concatenated, or position-independent KV caching, merges the KV caches of two adjacent pieces of text. It is a generalization of prefix KV caching. The idea recently got a boost from Meta's research in Lin et al. (2025), which developed a related technique that modifies the attention pattern over adjacent pieces of text and their precomputed caches. See Fused KV caching research.
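To make the idea concrete, here is a minimal sketch (my own illustration, not code from the paper): each chunk's KV cache is precomputed independently, and the caches are simply concatenated along the sequence axis at inference time. All names and shapes below are hypothetical, and the naive concatenation ignores cross-chunk attention and positional offsets, which is exactly the gap that modified attention patterns such as REFRAG's aim to close.

```python
# Minimal sketch of concatenated ("fused") KV caching: hypothetical names and shapes,
# single layer, single head, no batching. Not code from the REFRAG paper.
import numpy as np

def prefill_kv(chunk_hidden, W_k, W_v):
    """Precompute one chunk's KV cache independently of any other chunk."""
    return chunk_hidden @ W_k, chunk_hidden @ W_v      # (chunk_len, head_dim) each

def fuse_kv_caches(caches):
    """Concatenate precomputed per-chunk caches along the sequence axis.
    Naive fusion ignores cross-chunk attention and positional offsets; closing
    that gap is what modified attention patterns (e.g., REFRAG) are about."""
    Ks, Vs = zip(*caches)
    return np.concatenate(Ks, axis=0), np.concatenate(Vs, axis=0)

# Two RAG chunks cached separately, then fused for decoding without re-running prefill.
rng = np.random.default_rng(0)
d_model, head_dim = 64, 16
W_k = rng.standard_normal((d_model, head_dim))
W_v = rng.standard_normal((d_model, head_dim))
chunk_a = rng.standard_normal((10, d_model))
chunk_b = rng.standard_normal((7, d_model))
K, V = fuse_kv_caches([prefill_kv(chunk_a, W_k, W_v), prefill_kv(chunk_b, W_k, W_v)])
print(K.shape, V.shape)   # (17, 16) (17, 16)
```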
FFN Fusion
NVIDIA researchers found a way to merge two or more FFN components, in Bercovich et al. (2025). The method complements attention pruning: if an entire attention sublayer is removed, the FFN sublayers on either side become adjacent and can be merged into one wider FFN. See FFN Fusion.
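As a rough illustration of why merging adjacent FFNs is even possible, the sketch below (my own simplification with plain two-matrix ReLU FFNs and made-up shapes, not the gated FFNs or exact procedure of the paper) shows that concatenating the up-projections column-wise and the down-projections row-wise gives a single wider FFN whose output equals the sum of the individual FFNs run in parallel on the same input. Treating the originally sequential FFNs as parallel is the approximation that the paper studies.

```python
# Sketch of fusing two adjacent FFNs into one wider FFN (illustrative shapes only).
import numpy as np

def ffn(x, W_up, W_down):
    return np.maximum(x @ W_up, 0.0) @ W_down         # plain ReLU FFN

def fused_ffn(x, ups, downs):
    """One wider FFN whose hidden units are the concatenation of the originals.
    Its output equals the sum of the individual FFNs applied to the same input,
    i.e. the FFNs run "in parallel" as a single pair of larger MatMuls."""
    W_up = np.concatenate(ups, axis=1)                 # (d, h1+h2)
    W_down = np.concatenate(downs, axis=0)             # (h1+h2, d)
    return ffn(x, W_up, W_down)

rng = np.random.default_rng(1)
d, h = 8, 32
x = rng.standard_normal((3, d))
U1, D1 = rng.standard_normal((d, h)), rng.standard_normal((h, d))
U2, D2 = rng.standard_normal((d, h)), rng.standard_normal((h, d))

parallel_sum = ffn(x, U1, D1) + ffn(x, U2, D2)        # parallel approximation of two layers
fused = fused_ffn(x, [U1, U2], [D1, D2])
print(np.allclose(parallel_sum, fused))               # True: fusion == parallel sum
```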
FFN MatMul Merging
An FFN does two matrix multiplications with an intervening activation function (e.g., GELU). Interestingly, the two MatMuls in every FFN could be merged into a single operation if not for that pesky non-linearity in the middle. Recent research by Hu et al. (2025) shows that it is possible to do so by replacing the activation function with linear approximations, after which the two weight matrices collapse into one by associativity. See Merging FFN MatMuls.
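Here is a small numeric sketch of the underlying algebra (my own illustration under an assumed elementwise linear fit act(z) ≈ a*z + b, not the paper's actual approximation scheme): once the activation is linear, associativity lets the two weight matrices be pre-multiplied into one, so the FFN reduces to a single MatMul plus a bias.

```python
# Sketch of merging an FFN's two MatMuls under a linear activation approximation.
# The coefficients a, b and all shapes are illustrative, not from Hu et al. (2025).
import numpy as np

def gelu(z):
    return 0.5 * z * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (z + 0.044715 * z**3)))

def merge_ffn(W1, W2, a, b):
    """If act(z) ~= a*z + b elementwise, then (act(x @ W1)) @ W2 ~= x @ W_m + b_m."""
    W_m = (W1 * a) @ W2          # 'a' scales each hidden unit (column of W1)
    b_m = b @ W2
    return W_m, b_m

rng = np.random.default_rng(2)
d, h = 8, 32
x = rng.standard_normal((4, d))
W1, W2 = rng.standard_normal((d, h)), rng.standard_normal((h, d))
a, b = np.full(h, 0.5), np.zeros(h)       # crude per-unit linear fit of GELU near zero

W_m, b_m = merge_ffn(W1, W2, a, b)
merged = x @ W_m + b_m                                # one MatMul replaces two
linearized = (x @ W1 * a + b) @ W2                    # two MatMuls, linearized activation
exact = gelu(x @ W1) @ W2                             # original FFN for comparison
print(np.allclose(merged, linearized))                # True: the algebraic merge is exact
print(float(np.max(np.abs(merged - exact))))          # error comes only from the linear fit
```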
References
- Xiaoqiang Lin, Aritra Ghosh, Bryan Kian Hsiang Low, Anshumali Shrivastava, Vijai Mohan, 1 Sep 2025, REFRAG: Rethinking RAG based Decoding, https://www.arxiv.org/abs/2509.01092 https://www.alphaxiv.org/pdf/2509.01092 (Separates the attention computations across RAG chunks, which is effectively the same as "fused KV" or "concatenated KV" approaches with pre-computed per-chunk KV caches.)
- Akhiad Bercovich, Mohammad Dabbah, Omri Puny, Ido Galil, Amnon Geifman, Yonatan Geifman, Izhak Golan, Ehud Karpas, Itay Levy, Zach Moshe, Najeeb Nabwani, Tomer Ronen, Itamar Schen, Elad Segal, Ido Shahaf, Oren Tropp, Ran Zilberstein, Ran El-Yaniv, 24 Mar 2025, FFN Fusion: Rethinking Sequential Computation in Large Language Models, https://arxiv.org/abs/2503.18908
- Gansen Hu, Zhaoguo Wang, Jinglin Wei, Wei Huang, Haibo Chen, 17 Jan 2025, Accelerating Large Language Models through Partially Linear Feed-Forward Network, https://arxiv.org/abs/2501.10054 (Inspired by constant folding, the optimization merges the two MatMuls in an FFN by approximating the intervening non-linear activation function (e.g., ReLU or GELU) with linear functions and merging the two matrices using matrix-multiplication associativity.)
Aussie AI Advanced C++ Coding Books
- C++ AVX Optimization: CPU SIMD Vectorization. Get your copy from Amazon.
- C++ Ultra-Low Latency: Multithreading and Low-Level Optimizations. Get your copy from Amazon.
- Advanced C++ Memory Techniques: Efficiency & Safety. Get your copy from Amazon.
- Safe C++: Fixing Memory Safety Issues. Get it from Amazon.
- Efficient C++ Multithreading: Modern Concurrency Optimization. Get your copy from Amazon.
- Efficient Modern C++ Data Structures. Get your copy from Amazon.
- Low Latency C++: Multithreading and Hotpath Optimizations. Get your copy from Amazon.
- CUDA C++ Optimization. Get your copy from Amazon.
- CUDA C++ Debugging. Get your copy from Amazon.