Aussie AI
Fast and Slow Reasoning
Last Updated 27 August, 2025
by David Spuler, Ph.D.
Research on Fast and Slow Reasoning
Fast and slow reasoning refers to dual-process LLM architectures, inspired by the "System 1" versus "System 2" distinction from cognitive psychology, in which a model dynamically chooses between quick, direct inference and slower, multi-step deliberate reasoning depending on the difficulty of the query. Research papers include (a minimal routing sketch follows the list below):
- Jiabao Pan, Yan Zhang, Chen Zhang, Zuozhu Liu, Hongwei Wang, Haizhou Li, 1 Jul 2024, DynaThink: Fast or Slow? A Dynamic Decision-Making Framework for Large Language Models, https://arxiv.org/abs/2407.01009
- Xiaoyu Tian, Liangyu Chen, Na Liu, Yaxuan Liu, Wei Zou, Kaijiang Chen, Ming Cui, 24 Nov 2023 (v4), DUMA: a Dual-Mind Conversational Agent with Fast and Slow Thinking, https://arxiv.org/abs/2310.18075
- Daniele Paliotta, Junxiong Wang, Matteo Pagliardini, Kevin Y. Li, Aviv Bick, J. Zico Kolter, Albert Gu, François Fleuret, Tri Dao, 27 Feb 2025, Thinking Slow, Fast: Scaling Inference Compute with Distilled Reasoners, https://arxiv.org/abs/2502.20339
- Jianyuan Zhong, Zeju Li, Zhijian Xu, Xiangyu Wen, Qiang Xu, 16 Feb 2025, Dyve: Thinking Fast and Slow for Dynamic Process Verification, https://arxiv.org/abs/2502.11157
- Xiaoxue Cheng, Junyi Li, Wayne Xin Zhao, Ji-Rong Wen, 3 Jan 2025 (v2), Think More, Hallucinate Less: Mitigating Hallucinations via Dual Process of Fast and Slow Thinking, https://arxiv.org/abs/2501.01306
- Kangan Qian, Zhikun Ma, Yangfan He, Ziang Luo, Tianyu Shi, Tianze Zhu, Jiayin Li, Jianhui Wang, Ziyu Chen, Xiao He, Yining Shi, Zheng Fu, Xinyu Jiao, Kun Jiang, Diange Yang, Takafumi Matsumaru, 27 Nov 2024, FASIONAD : FAst and Slow FusION Thinking Systems for Human-Like Autonomous Driving with Adaptive Feedback, https://arxiv.org/abs/2411.18013
- Ming Li, Yanhong Li, Tianyi Zhou, 31 Oct 2024, What Happened in LLMs Layers when Trained for Fast vs. Slow Thinking: A Gradient Perspective, https://arxiv.org/abs/2410.23743
- DiJia Su, Sainbayar Sukhbaatar, Michael Rabbat, Yuandong Tian, Qinqing Zheng, 13 Oct 2024, Dualformer: Controllable Fast and Slow Thinking by Learning with Randomized Reasoning Traces, https://arxiv.org/abs/2410.09918
- Konstantina Christakopoulou, Shibl Mourad, Maja Matarić, 10 Oct 2024, Agents Thinking Fast and Slow: A Talker-Reasoner Architecture, https://arxiv.org/abs/2410.08328
- Zhiheng Lyu, Zhijing Jin, Fernando Gonzalez, Rada Mihalcea, Bernhard Schölkopf, Mrinmaya Sachan, 27 Oct 2024 (v2), Do LLMs Think Fast and Slow? A Causal Study on Sentiment Analysis, https://arxiv.org/abs/2404.11055
- Biqing Qi, Xingquan Chen, Junqi Gao, Dong Li, Jianxing Liu, Ligang Wu, Bowen Zhou, 19 Mar 2024 (v2), Interactive Continual Learning: Fast and Slow Thinking, https://arxiv.org/abs/2403.02628
- Pengbo Hu, Ji Qi, Xingyu Li, Hong Li, Xinqi Wang, Bing Quan, Ruiyu Wang, Yi Zhou, 21 Aug 2023 (v2), Tree-of-Mixed-Thought: Combining Fast and Slow Thinking for Multi-hop Visual Reasoning, https://arxiv.org/abs/2308.09658
- Thilo Hagendorff, Sarah Fabi, Michal Kosinski, 2 Aug 2023 (v2), Thinking Fast and Slow in Large Language Models, https://arxiv.org/abs/2212.05206
- Wenlin Yao, Haitao Mi, Dong Yu, 25 Sep 2024, HDFlow: Enhancing LLM Complex Problem-Solving with Hybrid Thinking and Dynamic Workflows, https://arxiv.org/abs/2409.17433
- Fei Tang, Yongliang Shen, Hang Zhang, Siqi Chen, Guiyang Hou, Wenqi Zhang, Wenqiao Zhang, Kaitao Song, Weiming Lu, Yueting Zhuang, 9 Mar 2025, Think Twice, Click Once: Enhancing GUI Grounding via Fast and Slow Systems, https://arxiv.org/abs/2503.06470
- Guan Wang, Jin Li, Yuhao Sun, Xing Chen, Changling Liu, Yue Wu, Meng Lu, Sen Song, Yasin Abbasi Yadkori, 22 Jul 2025 (v2), Hierarchical Reasoning Model, https://arxiv.org/abs/2506.21734 https://github.com/sapientinc/HRM
- Ben Dickson, July 25, 2025, New AI architecture delivers 100x faster reasoning than LLMs with just 1,000 training examples, https://venturebeat.com/ai/new-ai-architecture-delivers-100x-faster-reasoning-than-llms-with-just-1000-training-examples/
- Jason Zhu, Hongyu Li, 13 Jul 2025, Towards Concise and Adaptive Thinking in Large Reasoning Models: A Survey, https://arxiv.org/abs/2507.09662
- Qianjun Pan, Wenkai Ji, Yuyang Ding, Junsong Li, Shilian Chen, Junyi Wang, Jie Zhou, Qin Chen, Min Zhang, Yulan Wu, Liang He, 8 May 2025 (v2), A Survey of Slow Thinking-based Reasoning LLMs using Reinforced Learning and Inference-time Scaling Law, https://arxiv.org/abs/2505.02665
- Chang Xiao, Brenda Yang, 23 Jul 2025, Streaming, Fast and Slow: Cognitive Load-Aware Streaming for Efficient LLM Serving, https://arxiv.org/abs/2504.17999
- Lori Dajose, December 17, 2024, Thinking Slowly: The Paradoxical Slowness of Human Behavior, https://www.caltech.edu/about/news/thinking-slowly-the-paradoxical-slowness-of-human-behavior
- Zhong-Zhi Li, Duzhen Zhang, Ming-Liang Zhang, Jiaxin Zhang, Zengyan Liu, Yuxuan Yao, Haotian Xu, Junhao Zheng, Pei-Jie Wang, Xiuyi Chen, Yingying Zhang, Fei Yin, Jiahua Dong, Zhijiang Guo, Le Song, Cheng-Lin Liu, 25 Feb 2025 (v2), From System 1 to System 2: A Survey of Reasoning Large Language Models, https://arxiv.org/abs/2502.17419
- Zhihao Dou, Dongfei Cui, Jun Yan, Weida Wang, Benteng Chen, Haoming Wang, Zeke Xie, Shufei Zhang, 25 Aug 2025, DSADF: Thinking Fast and Slow for Decision Making, https://arxiv.org/abs/2505.08189
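To make the common pattern in these papers concrete, here is a minimal sketch of fast/slow routing. The `fast_model` and `slow_model` callables and the unanimity test are illustrative assumptions, not any specific paper's method; the consistency-based gating is loosely in the spirit of approaches like DynaThink, which escalates to slower reasoning when quick answers disagree.

```python
# Minimal fast/slow routing sketch. The fast_model and slow_model callables
# are hypothetical stand-ins for a cheap direct-answer LLM and an expensive
# chain-of-thought LLM; nothing here is taken from a specific paper's code.
import random
from collections import Counter
from typing import Callable

def route_fast_slow(
    question: str,
    fast_model: Callable[[str], str],  # cheap "System 1" path (assumed interface)
    slow_model: Callable[[str], str],  # deliberate "System 2" path (assumed interface)
    num_samples: int = 3,
) -> str:
    """Answer via the fast path if its samples agree; otherwise escalate."""
    # Sample the fast path several times to estimate its self-consistency.
    answers = [fast_model(question) for _ in range(num_samples)]
    top_answer, top_count = Counter(answers).most_common(1)[0]
    # Unanimous fast answers are treated as high confidence: return cheaply.
    if top_count == num_samples:
        return top_answer
    # Disagreement suggests a hard query: fall back to slow, multi-step reasoning.
    return slow_model(question)

if __name__ == "__main__":
    # Toy stand-ins so the sketch runs without any LLM backend.
    fast = lambda q: "4" if "2+2" in q else random.choice(["A", "B", "C"])
    slow = lambda q: "(deliberate multi-step answer)"
    print(route_fast_slow("What is 2+2?", fast, slow))         # fast path: "4"
    print(route_fast_slow("Hard open question?", fast, slow))  # usually escalates
```

Real systems replace the unanimity test with learned difficulty estimators, verifier scores, or token-budget policies, but the routing shape is the same: a cheap default path plus an escalation trigger.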
AI Books from Aussie AI
- The Sweetest Lesson: Your Brain Versus AI: new book on AI intelligence theory. Get your copy from Amazon: The Sweetest Lesson
- RAG Optimization: Accurate and Efficient LLM Applications: new book on RAG architectures. Get your copy from Amazon: RAG Optimization
- Generative AI Applications book. Get your copy from Amazon: Generative AI Applications
- Generative AI programming book. Get your copy from Amazon: Generative AI in C++
- CUDA C++ Optimization book. Get your copy from Amazon: CUDA C++ Optimization
- CUDA C++ Debugging book. Get your copy from Amazon: CUDA C++ Debugging
More AI Research Topics
Read more about:
- 500+ LLM Inference Optimization Techniques
- What's Hot in LLM Inference Optimization in 2025?
- Inference Optimization Research
- « Research Home