Aussie AI

Multi-head Latent Attention (MLA)

  • Last Updated 29 August, 2025
  • by David Spuler, Ph.D.

What is Multi-head Latent Attention (MLA)?

Multi-head Latent Attention (MLA) is an LLM attention optimization developed by DeepSeek. It became well known with the release of the DeepSeek R1 reasoning model in early 2025, but was actually developed earlier for DeepSeek's V2 and V3 non-reasoning models in mid-to-late 2024. The core idea is to compress each token's keys and values into a small shared "latent" vector, which greatly shrinks the KV cache.

MLA improves upon well-known LLM attention methods such as Multi-Head Attention (MHA) from the original Transformer paper, and the follow-on optimizations Multi-Query Attention (MQA) and Grouped-Query Attention (GQA). Subsequently, DeepSeek also open-sourced "FlashMLA," an implementation that combines MLA with Flash Attention.
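The latent-compression idea can be illustrated with a minimal sketch. Instead of caching full per-head keys and values, the model caches one small latent vector per token (a down-projection of the hidden state) and reconstructs per-head keys and values from it at attention time via up-projection matrices. This is a simplified illustration only: the dimensions and weight names below are made up for the example, and real MLA includes further details (such as a separate decoupled positional-encoding path) that are omitted here.

```python
import numpy as np

# Illustrative dimensions (not DeepSeek's actual configuration).
d_model, d_latent, n_heads, d_head = 64, 16, 4, 16
rng = np.random.default_rng(0)

# Hypothetical weights: one shared down-projection, plus per-head
# up-projections for keys and values.
W_dkv = rng.standard_normal((d_model, d_latent)) * 0.1
W_uk = rng.standard_normal((n_heads, d_latent, d_head)) * 0.1
W_uv = rng.standard_normal((n_heads, d_latent, d_head)) * 0.1

seq_len = 8
h = rng.standard_normal((seq_len, d_model))  # token hidden states

# The KV cache stores only the latent vectors: seq_len x d_latent
# floats, versus seq_len x n_heads x d_head x 2 for full keys/values.
c_kv = h @ W_dkv                             # (seq_len, d_latent)

# At attention time, per-head keys and values are reconstructed
# from the cached latent vectors.
K = np.einsum("sl,hld->hsd", c_kv, W_uk)     # (n_heads, seq_len, d_head)
V = np.einsum("sl,hld->hsd", c_kv, W_uv)     # (n_heads, seq_len, d_head)

full_cache = seq_len * n_heads * d_head * 2  # MHA-style cache entries
mla_cache = seq_len * d_latent               # latent cache entries
print(f"cache entries: full={full_cache}, latent={mla_cache}")
# prints: cache entries: full=1024, latent=128
```

With these toy dimensions the latent cache is 8x smaller than a full KV cache; the memory saving is the main motivation for MLA, since KV cache size dominates memory usage at long context lengths.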

Research on MLA

Research papers on MLA include:

AI Books from Aussie AI



The Sweetest Lesson: Your Brain Versus AI: new book on AI intelligence theory:
  • Your brain is 50 times bigger than the best AI engines.
  • Truly intelligent AI will require more compute!
  • Another case of the bitter lesson?
  • Maybe it's the opposite of that: the sweetest lesson.

Get your copy from Amazon: The Sweetest Lesson



RAG Optimization: Accurate and Efficient LLM Applications: new book on RAG architectures:
  • Smarter RAG
  • Faster RAG
  • Cheaper RAG
  • Agentic RAG
  • RAG reasoning

Get your copy from Amazon: RAG Optimization



Generative AI Applications book:
  • Deciding on your AI project
  • Planning for success and safety
  • Designs and LLM architectures
  • Expediting development
  • Implementation and deployment

Get your copy from Amazon: Generative AI Applications



Generative AI in C++ programming book:
  • Generative AI coding in C++
  • Transformer engine speedups
  • LLM models
  • Phone and desktop AI
  • Code examples
  • Research citations

Get your copy from Amazon: Generative AI in C++



CUDA C++ Optimization book:
  • Faster CUDA C++ kernels
  • Optimization tools & techniques
  • Compute optimization
  • Memory optimization

Get your copy from Amazon: CUDA C++ Optimization



CUDA C++ Debugging book:
  • Debugging CUDA C++ kernels
  • Tools & techniques
  • Self-testing & reliability
  • Common GPU kernel bugs

Get your copy from Amazon: CUDA C++ Debugging

More AI Research

Read more about: