Aussie AI
Large Concept Models
-
Last Updated 8 August, 2025
-
by David Spuler, Ph.D.
What are Large Concept Models?
Large Concept Models (LCMs) are a newer LLM architecture that performs inference over concepts, typically sentence-level embeddings, rather than over individual tokens or words. Reasoning proceeds hierarchically over a document's concept structure, rather than via the traditional LLM approach of emitting tokens one at a time. Advantages include much more efficient processing of long documents (a document has far fewer sentences than tokens), stronger conceptual reasoning, and natural handling of multiple languages, because the model operates at a level above individual words. For related approaches, see also concept tokens and reasoning tokens.
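To make the idea concrete, here is a minimal sketch of concept-level inference in Python. It is illustrative only: `embed_sentence` is a toy hash-based stand-in for a real learned sentence encoder (Meta's LCM uses SONAR embeddings), and `next_concept` is a placeholder for the transformer that a real LCM would run over the sequence of sentence embeddings. The point is the shape of the computation: a document becomes a short sequence of concept vectors, one per sentence, instead of hundreds of tokens.

```python
import hashlib
import numpy as np

def embed_sentence(sentence, dim=8):
    # Toy stand-in for a learned sentence encoder such as SONAR:
    # derive a deterministic random vector per word and average them.
    vecs = []
    for word in sentence.lower().split():
        seed = int(hashlib.md5(word.encode()).hexdigest(), 16) % (2**32)
        rng = np.random.default_rng(seed)
        vecs.append(rng.standard_normal(dim))
    return np.mean(vecs, axis=0)

def next_concept(context):
    # Placeholder "concept predictor": a real LCM runs a transformer
    # over the sequence of sentence embeddings to predict the next
    # embedding; here we just average the context.
    return np.mean(np.stack(context), axis=0)

doc = [
    "Large Concept Models reason over whole sentences.",
    "Each sentence is mapped to a single embedding.",
    "Inference then predicts the next sentence embedding.",
]
concepts = [embed_sentence(s) for s in doc]  # 3 concept steps, not ~25 token steps
predicted = next_concept(concepts)
print(len(concepts), predicted.shape)
```

Note the efficiency argument falls out directly: autoregression runs for one step per sentence rather than one step per token, so the sequence length the model attends over shrinks by roughly an order of magnitude for typical prose.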
Research on Large Concept Models (LCMs)
Research papers include:
- LCM team, Loïc Barrault, Paul-Ambroise Duquenne, Maha Elbayad, Artyom Kozhevnikov, Belen Alastruey, Pierre Andrews, Mariano Coria, Guillaume Couairon, Marta R. Costa-jussà, David Dale, Hady Elsahar, Kevin Heffernan, João Maria Janeiro, Tuan Tran, Christophe Ropers, Eduardo Sánchez, Robin San Roman, Alexandre Mourachko, Safiyyah Saleem, Holger Schwenk, 15 Dec 2024 (v2), Large Concept Models: Language Modeling in a Sentence Representation Space, https://arxiv.org/abs/2412.08821 https://github.com/facebookresearch/large_concept_model (Model operates at the sentence concept level, using SONAR sentence embeddings.)
- Dr. Ashish Bamania, Dec 2024, Meta’s Large Concept Models (LCMs) Are Here To Challenge And Redefine LLMs: A deep dive into ‘Large Concept Model’, a novel language processing architecture and evaluating its performance against state-of-the-art LLMs, https://levelup.gitconnected.com/metas-large-concept-models-lcms-are-here-to-challenge-and-redefine-llms-7f9778f88a87
- Hussain Ahmad, Diksha Goel, 8 Jan 2025, The Future of AI: Exploring the Potential of Large Concept Models, https://arxiv.org/abs/2501.05487
- Giuliano Liguori, Jan 2025, Large Concept Models (LCM): A New Frontier in AI Beyond Token-Level Language Models, https://www.linkedin.com/pulse/large-concept-models-lcm-new-frontier-ai-beyond-giuliano-liguori--dnj3f/
- Jihoon Tack, Jack Lanchantin, Jane Yu, Andrew Cohen, Ilia Kulikov, Janice Lan, Shibo Hao, Yuandong Tian, Jason Weston, Xian Li, 12 Feb 2025, LLM Pretraining with Continuous Concepts, https://arxiv.org/abs/2502.08524
- Vishal Rajput, Feb 2025, Forget LLMs, It’s Time For Large Concept Models (LCMs), https://medium.com/aiguys/forget-llms-its-time-for-large-concept-models-lcms-05b75fe43185
- Vivek K. Tiwari, 2025, Towards Practical Concept-Based Language Models: An Efficiency-Focused Implementation, https://www.researchgate.net/profile/Vivek-Tiwari-41/publication/388753941_Towards_Practical_Concept-Based_Language_Models_An_Efficiency-Focused_Implementation/links/67a4bf86461fb56424cc6b62/Towards-Practical-Concept-Based-Language-Models-An-Efficiency-Focused-Implementation.pdf
- Datacamp, Feb 21, 2025, Large Concept Models: A Guide With Examples: Learn what large concept models are, how they differ from LLMs, and how their architecture leads to improvements in language processing, https://www.datacamp.com/blog/large-concept-models
- Mehul Gupta, Jan 5, 2025, Meta Large Concept Models (LCM): End of LLMs? What are LCMs and how is LCM different from LLMs, https://medium.com/data-science-in-your-pocket/meta-large-concept-models-lcm-end-of-llms-68cb0c5cd5cf
- By AI Papers Academy, 3 January 2025, Large Concept Models (LCMs) by Meta: The Era of AI After LLMs? https://aipapersacademy.com/large-concept-models/
- Andrea Viliotti, 20 Dec 2024, Large Concept Model (LCM): a new paradigm for large-scale semantic reasoning in AI, https://www.andreaviliotti.it/post/large-concept-model-lcm-a-new-paradigm-for-large-scale-semantic-reasoning-in-ai
- Leadership in AI, January 2025, Meta’s stunning LCM large concept models for artificial intelligence — they are thinking now! https://www.youtube.com/watch?v=uZ3HCw8ApQ
- Lance Eliot, Jan 06, 2025, AI Is Breaking Free Of Token-Based LLMs By Upping The Ante To Large Concept Models That Devour Sentences And Adore Concepts, https://www.forbes.com/sites/lanceeliot/2025/01/06/ai-is-breaking-free-of-token-based-llms-by-upping-the-ante-to-large-concept-models-that-devour-sentences-and-adore-concepts/
- Zen the innovator, Jan 5, 2025, Large Concept Models (LCMs), https://medium.com/@ThisIsMeIn360VR/large-concept-models-lcms-d59b86531ef6
- Debabrata Pruseth, Jan 2025, LCMs: Large Concept Models – The Path to AGI (Artificial General Intelligence) & The Future of AI Thinking, https://debabratapruseth.com/lcms-large-concept-models-the-path-to-agi-the-future-of-ai-thinking/
- Asif Razzaq, December 15, 2024, Meta AI Proposes Large Concept Models (LCMs): A Semantic Leap Beyond Token-based Language Modeling, https://www.marktechpost.com/2024/12/15/meta-ai-proposes-large-concept-models-lcms-a-semantic-leap-beyond-token-based-language-modeling/
- Aniket Hingane, Dec 27, 2024, Practical Advancements in AI: How Large Concept Models Are Redefining the Landscape of LLMs, https://medium.com/@learn-simplified/practical-advancements-in-ai-how-large-concept-models-are-redefining-the-landscape-of-llms-b0220296458b
- Siddhant Rai and Vizuara AI, Dec 30, 2024, Large Concept Models: Language Modeling in a Sentence Representation Space: Re-imagining the core principles behind representation generation in foundation models, https://vizuara.substack.com/p/large-concept-models-language-modeling
- J Liao, R Xie, S Li, X Wang, X Sun, Z Kang, X He, 2025, Multi-Grained Patch Training for Efficient LLM-based Recommendation, https://hexiangnan.github.io/papers/sigir25-PatchRec.pdf
- Ignacio de Gregorio, June 2025, What If We Are All Wrong About AI? The contrarian bet by Meta, in plain English, https://medium.com/@ignacio.de.gregorio.noblejas/what-if-we-are-all-wrong-about-ai-f33a3c64055c
- Tomek Korbak, Mikita Balesni, (and many more authors) July 2025, Chain of Thought Monitorability: A New and Fragile Opportunity for AI Safety, https://tomekkorbak.com/cot-monitorability-is-a-fragile-opportunity/cot_monitoring.pdf
- Sebastian Raschka, Mar 8, 2025, Inference-Time Compute Scaling Methods to Improve Reasoning Models: Part 1: Inference-Time Compute Scaling Methods, https://sebastianraschka.com/blog/2025/state-of-llm-reasoning-and-inference-scaling.html
- Jonas Geiping, Sean McLeish, Neel Jain, John Kirchenbauer, Siddharth Singh, Brian R. Bartoldson, Bhavya Kailkhura, Abhinav Bhatele, Tom Goldstein, 17 Feb 2025 (v2), Scaling up Test-Time Compute with Latent Reasoning: A Recurrent Depth Approach, https://arxiv.org/abs/2502.05171
- Sicheng Feng, Gongfan Fang, Xinyin Ma, Xinchao Wang, 15 Apr 2025, Efficient Reasoning Models: A Survey, https://arxiv.org/abs/2504.10903
AI Books from Aussie AI
The Sweetest Lesson: Your Brain Versus AI: new book on AI intelligence theory. Get your copy from Amazon: The Sweetest Lesson
RAG Optimization: Accurate and Efficient LLM Applications: new book on RAG architectures. Get your copy from Amazon: RAG Optimization
Generative AI Applications book. Get your copy from Amazon: Generative AI Applications
Generative AI programming book. Get your copy from Amazon: Generative AI in C++
CUDA C++ Optimization book. Get your copy from Amazon: CUDA C++ Optimization
CUDA C++ Debugging book. Get your copy from Amazon: CUDA C++ Debugging
More AI Research
Read more about: