Aussie AI

Prompt Engineering: Types and Optimizations

  • Last Updated 1 January, 2026
  • by David Spuler, Ph.D.

Optimizing Prompt Engineering

There are several simple ways to get better results from LLMs using prompt engineering techniques within a single prompt:

  • Be specific
  • Give examples
  • Write longer prompts

Some more advanced approaches, several of which are combined in the example prompt below, include:

  • Give multiple examples (few-shot prompting)
  • Negative prompting (tell the AI what not to do)
  • Personas
  • Chain-of-thought ("step-by-step" requests)
  • Specify an output format
  • Specify a tone, reading level, or other text meta-attribute.
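
As an illustration, here is a hypothetical Python prompt template that combines several of these techniques: a persona, two few-shot examples, a negative instruction, and a required output format. The task and examples are invented for illustration only.

    # Hypothetical example combining persona, few-shot examples,
    # negative prompting, and an output format in one prompt.

    def build_prompt(question: str) -> str:
        return f"""You are a senior Python developer (persona).
    Answer as JSON with keys "answer" and "confidence" (output format).
    Do not include any text outside the JSON (negative prompting).

    Example 1 (few-shot):
    Q: What does len([1, 2, 3]) return?
    A: {{"answer": "3", "confidence": "high"}}

    Example 2 (few-shot):
    Q: What does 2 ** 10 evaluate to?
    A: {{"answer": "1024", "confidence": "high"}}

    Q: {question}
    A:"""

    print(build_prompt("What does sorted('cba') return?"))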

There are various ways to follow up with additional prompts (a minimal reflection loop is sketched after the list):

  • Iterative prompting (improve the next prompt based on the previous answer)
  • Ask the LLM to explain its reasoning
  • Ask the LLM to evaluate its own answer ("reflection")
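
Here is a minimal sketch of the follow-up pattern, assuming a hypothetical llm() helper that wraps whichever chat-completion API you use: the model answers, critiques its own answer ("reflection"), and then revises it.

    def llm(prompt: str) -> str:
        # Placeholder: substitute a real chat-completion call here.
        return "(model output)"

    def answer_with_reflection(question: str, rounds: int = 2) -> str:
        answer = llm(question)
        for _ in range(rounds):
            # Ask the model to evaluate its own answer ("reflection").
            critique = llm(
                f"Question: {question}\nAnswer: {answer}\n"
                "Evaluate this answer and list any errors or omissions."
            )
            # Iterative prompting: the next prompt builds on the last answer.
            answer = llm(
                f"Question: {question}\nDraft answer: {answer}\n"
                f"Critique: {critique}\n"
                "Write an improved final answer."
            )
        return answer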

Types of Prompt Engineering

The general categories of prompt engineering techniques are:

  • Zero-shot prompting — no examples.
  • One-shot prompting — one example.
  • Few-shot prompting — multiple examples in the prompt.

There are several prompt engineering techniques known to improve results in terms of answer accuracy and perplexity:

  • Emotional prompting
  • "Step-by-step" prompting (zero-shot CoT)
  • Skeleton-of-thought
  • Chain-of-Thought (CoT) (few-shot)
  • Tree-of-Thought (ToT)

Surveys on Prompting Techniques

Survey papers on prompt engineering:

  • Sander Schulhoff, Michael Ilie, Nishant Balepur, Konstantine Kahadze, Amanda Liu, Chenglei Si, Yinheng Li, Aayush Gupta, HyoJung Han, Sevien Schulhoff, Pranav Sandeep Dulepet, Saurav Vidyadhara, Dayeon Ki, Sweta Agrawal, Chau Pham, Gerson Kroiz, Feileen Li, Hudson Tao, Ashay Srivastava, Hevander Da Costa, Saloni Gupta, Megan L. Rogers, Inna Goncearenco, Giuseppe Sarli, Igor Galynker, Denis Peskoff, Marine Carpuat, Jules White, Shyamal Anadkat, Alexander Hoyle, Philip Resnik, 6 Jun 2024, The Prompt Report: A Systematic Survey of Prompting Techniques, https://arxiv.org/abs/2406.06608
  • Xiaoxia Liu, Jingyi Wang, Jun Sun, Xiaohan Yuan, Guoliang Dong, Peng Di, Wenhai Wang, Dongxia Wang, 21 Nov 2023, Prompting Frameworks for Large Language Models: A Survey, https://arxiv.org/abs/2311.12785
  • Pranab Sahoo, Ayush Kumar Singh, Sriparna Saha, Vinija Jain, Samrat Mondal, Aman Chadha, 5 Feb 2024, A Systematic Survey of Prompt Engineering in Large Language Models: Techniques and Applications, https://arxiv.org/abs/2402.07927
  • Yuan-Feng Song, Yuan-Qin He, Xue-Fang Zhao, Han-Lin Gu, Di Jiang, Hai-Jun Yang, Li-Xin Fan, July 2024, A communication theory perspective on prompting engineering methods for large language models, Journal of Computer Science and Technology 39(4): 984−1004, DOI: 10.1007/s11390-024-4058-8, https://doi.org/10.1007/s11390-024-4058-8 https://jcst.ict.ac.cn/en/article/pdf/preview/10.1007/s11390-024-4058-8.pdf
  • Vishal Rajput, Oct 2024, The Prompt Report: Prompt Engineering Techniques, https://medium.com/aiguys/the-prompt-report-prompt-engineering-techniques-254464b0b32b

Emotional Prompting

Researchers discovered a counterintuitive technique: adding emotional language to a prompt improves LLM results. It is unclear why this works; perhaps the emotional phrasing directs more attention to important sources (i.e., tokens and weights), or perhaps it reduces attention to casual, lower-quality styles of text.
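
In practice the technique is simple: append an emotional stimulus phrase to the prompt. A minimal sketch follows; the example phrase is one of the stimuli studied by Li et al. (2023), cited below.

    # The stimulus phrase below is one of those studied by Li et al. (2023).
    EMOTIONAL_STIMULUS = "This is very important to my career."

    def emotional_prompt(base_prompt: str) -> str:
        # Append the emotional stimulus after the base prompt text.
        return f"{base_prompt}\n\n{EMOTIONAL_STIMULUS}"

    print(emotional_prompt("Summarize this contract in three bullet points."))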

Research papers on emotional prompting:

  • Cheng Li, Jindong Wang, Yixuan Zhang, Kaijie Zhu, Wenxin Hou, Jianxun Lian, Fang Luo, Qiang Yang, Xing Xie, 12 Nov 2023 (v7), Large Language Models Understand and Can be Enhanced by Emotional Stimuli, https://arxiv.org/abs/2307.11760 https://llm-enhance.github.io/
  • Pranab Sahoo, Ayush Kumar Singh, Sriparna Saha, Vinija Jain, Samrat Mondal, Aman Chadha, 5 Feb 2024, A Systematic Survey of Prompt Engineering in Large Language Models: Techniques and Applications, https://arxiv.org/abs/2402.07927
  • Chenggian Ma, Xiangyu Zhao, Chunhui Zhang, Yanzhao Qin, Wentao Zhang, 16 Apr 2024, When Emotional Stimuli meet Prompt Designing: An Auto-Prompt Graphical Paradigm, https://arxiv.org/abs/2404.10500
  • Yarik Menchaca Resendiz, Roman Klinger, 9 Aug 2023, Emotion-Conditioned Text Generation through Automatic Prompt Optimization, https://arxiv.org/abs/2308.04857
  • Cheng Li, Jindong Wang, Yixuan Zhang, Kaijie Zhu, Xinyi Wang, Wenxin Hou, Jianxun Lian, Fang Luo, Qiang Yang, Xing Xie, 7 Jun 2024 (v3), The Good, The Bad, and Why: Unveiling Emotions in Generative AI, https://arxiv.org/abs/2312.11111
  • Mike Taylor Oct 29, 2024, Five proven prompt engineering techniques (and a few more-advanced tactics), https://www.lennysnewsletter.com/p/five-proven-prompt-engineering-techniques
  • Ziqi Yin, Hao Wang, Kaito Horio, Daisuke Kawahara, Satoshi Sekine, 14 Oct 2024 (v2), Should We Respect LLMs? A Cross-Lingual Study on the Influence of Prompt Politeness on LLM Performance, https://arxiv.org/abs/2402.14531
  • Guanrou Yang, Chen Yang, Qian Chen, Ziyang Ma, Wenxi Chen, Wen Wang, Tianrui Wang, Yifan Yang, Zhikang Niu, Wenrui Liu, Fan Yu, Zhihao Du, Zhifu Gao, ShiLiang Zhang, Xie Chen, 13 Aug 2025, EmoVoice: LLM-based Emotional Text-To-Speech Model with Freestyle Text Prompting, https://arxiv.org/abs/2504.12867

Chain-of-Thought (CoT)

Chain-of-thought prompting is a "step-by-step" prompting method. As a zero-shot technique, it involves simply adding an encouragement such as "Let's think step by step" to the prompt given to the LLM. As a few-shot technique, it involves including worked examples whose answers spell out their intermediate reasoning steps, so that the model imitates the same step-by-step style in its own answer.
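
Below is a minimal sketch of zero-shot CoT in the style of Kojima et al. (2022, cited below), assuming a hypothetical llm() helper: the first call elicits step-by-step reasoning, and a second call extracts the final answer from that reasoning.

    def llm(prompt: str) -> str:
        # Placeholder: substitute a real chat-completion call here.
        return "(model output)"

    def zero_shot_cot(question: str) -> str:
        # Stage 1: elicit the reasoning chain with the trigger phrase.
        reasoning = llm(f"Q: {question}\nA: Let's think step by step.")
        # Stage 2: extract the final answer from the reasoning.
        return llm(
            f"Q: {question}\nA: Let's think step by step. {reasoning}\n"
            "Therefore, the final answer is:"
        )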

Research papers on chain-of-thought:

  • Jacob Pfau, William Merrill, Samuel R. Bowman, 24 Apr 2024, Let's Think Dot by Dot: Hidden Computation in Transformer Language Models, https://arxiv.org/abs/2404.15758
  • Hongxuan Zhang, Zhining Liu, Jiaqi Zheng, Chenyi Zhuang, Jinjie Gu, Guihai Chen, Nov 2023, Fast Chain-of-Thought: A Glance of Future from Parallel Decoding Leads to Answers Faster, https://arxiv.org/abs/2311.08263
  • Hunter Lightman, Vineet Kosaraju, Yura Burda, Harri Edwards, Bowen Baker, Teddy Lee, Jan Leike, John Schulman, Ilya Sutskever, Karl Cobbe, May 2023, Let's Verify Step by Step, https://arxiv.org/abs/2305.20050
  • Xuan Zhang, Chao Du, Tianyu Pang, Qian Liu, Wei Gao, Min Lin, 13 Jun 2024, Chain of Preference Optimization: Improving Chain-of-Thought Reasoning in LLMs, https://arxiv.org/abs/2406.09136 Code: https://github.com/sail-sg/CPO
  • kipply's blog, 2023-03-30, Transformer Taxonomy (the last lit review), https://kipp.ly/transformer-taxonomy/ (Papers for all the Transformer architectures and milestone papers for the major optimization improvements on them.)
  • Daniel Lopes, June 21, 2024, A Comprehensive Guide to Text Prompt Engineering Techniques, https://journal.daniellopes.dev/p/practical-prompt-engineering-notes
  • Wenxiao Wang, Wei Chen, Yicong Luo, Yongliu Long, Zhengkai Lin, Liye Zhang, Binbin Lin, Deng Cai, Xiaofei He, 15 Feb 2024, Model Compression and Efficient Inference for Large Language Models: A Survey, https://arxiv.org/abs/2402.09748
  • Hao Zhou, Chengming Hu, Ye Yuan, Yufei Cui, Yili Jin, Can Chen, Haolun Wu, Dun Yuan, Li Jiang, Di Wu, Xue Liu, Charlie Zhang, Xianbin Wang, Jiangchuan Liu, 17 May 2024, Large Language Model (LLM) for Telecommunications: A Comprehensive Survey on Principles, Key Techniques, and Opportunities, https://arxiv.org/abs/2405.10825
  • Yu Wang, Shiwan Zhao, Zhihu Wang, Heyuan Huang, Ming Fan, Yubo Zhang, Zhixing Wang, Haijun Wang, Ting Liu, 5 Sep 2024, Strategic Chain-of-Thought: Guiding Accurate Reasoning in LLMs through Strategy Elicitation, https://arxiv.org/abs/2409.03271
  • Asankhaya Sharma (codelion), Sep 2024, Optillm: Optimizing inference proxy for LLMs, https://github.com/codelion/optillm
  • Ziqi Jin, Wei Lu, 6 Sep 2024, Self-Harmonized Chain of Thought, https://arxiv.org/abs/2409.04057
  • Pranab Sahoo, Ayush Kumar Singh, Sriparna Saha, Vinija Jain, Samrat Mondal, Aman Chadha, 5 Feb 2024, A Systematic Survey of Prompt Engineering in Large Language Models: Techniques and Applications, https://arxiv.org/abs/2402.07927
  • Shizhe Diao, Pengcheng Wang, Yong Lin, Rui Pan, Xiang Liu, Tong Zhang, 21 Jul 2024 (v5), Active Prompting with Chain-of-Thought for Large Language Models, https://arxiv.org/abs/2302.12246 https://github.com/shizhediao/active-prompt
  • Zhuosheng Zhang, Aston Zhang, Mu Li, Alex Smola, 7 Oct 2022, Automatic Chain of Thought Prompting in Large Language Models, https://arxiv.org/abs/2210.03493 https://github.com/amazon-research/auto-cot
  • Maciej Besta, Florim Memedi, Zhenyu Zhang, Robert Gerstenberger, Guangyuan Piao, Nils Blach, Piotr Nyczyk, Marcin Copik, Grzegorz Kwaśniewski, Jürgen Müller, Lukas Gianinazzi, Ales Kubicek, Hubert Niewiadomski, Aidan O'Mahony, Onur Mutlu, Torsten Hoefler, 5 Apr 2024, Demystifying Chains, Trees, and Graphs of Thoughts, https://arxiv.org/abs/2401.14295 http://htor.ethz.ch/publications/img/besta-topologies.pdf
  • Louis Bouchard, Sep 12, 2024, OpenAI's o1 Model: The Future of Reasoning AI? What Sets It Apart, How OpenAI's o1 Model Thinks Through Problems (And Why It's Slower), https://www.louisbouchard.ai/openai-o1/
  • OpenAI, September 12, 2024, Learning to Reason with LLMs, https://openai.com/index/learning-to-reason-with-llms/
  • Emilia David, September 12, 2024, How to prompt on OpenAI’s new o1 models, https://venturebeat.com/ai/how-to-prompt-on-openai-o1/ (Prompt engineering is different for o1, such as "don't use chain of thought.")
  • Du Phan, Matthew D. Hoffman, David Dohan, Sholto Douglas, Tuan Anh Le, Aaron Parisi, Pavel Sountsov, Charles Sutton, Sharad Vikram, Rif A. Saurous, 28 Nov 2023, Training Chain-of-Thought via Latent-Variable Inference, https://arxiv.org/abs/2312.02179
  • Trung Quoc Luong, Xinbo Zhang, Zhanming Jie, Peng Sun, Xiaoran Jin, Hang Li, 27 Jun 2024 (v2), ReFT: Reasoning with Reinforced Fine-Tuning, https://arxiv.org/abs/2401.08967
  • Tianqiao Liu, Zui Chen, Zitao Liu, Mi Tian, Weiqi Luo, 13 Sep 2024, Expediting and Elevating Large Language Model Reasoning via Hidden Chain-of-Thought Decoding, https://arxiv.org/abs/2409.08561
  • Zayne Sprague, Fangcong Yin, Juan Diego Rodriguez, Dongwei Jiang, Manya Wadhwa, Prasann Singhal, Xinyu Zhao, Xi Ye, Kyle Mahowald, Greg Durrett, 18 Sep 2024, To CoT or not to CoT? Chain-of-thought helps mainly on math and symbolic reasoning, https://arxiv.org/abs/2409.12183
  • Santosh Kumar Radha, Yasamin Nouri Jelyani, Ara Ghukasyan, Oktay Goktas, 19 Sep 2024, Iteration of Thought: Leveraging Inner Dialogue for Autonomous Large Language Model Reasoning, https://arxiv.org/abs/2409.12618
  • Artem Shelamanov, Sep 2024, Why OpenAI’s o1 Model Is A Scam, https://pub.towardsai.net/why-openais-o1-model-is-a-scam-eb3356c3d70e
  • Chung-Yu Wang, Alireza DaghighFarsoodeh, Hung Viet Pham, 24 Sep 2024, Task-oriented Prompt Enhancement via Script Generation, https://arxiv.org/abs/2409.16418
  • Cassandra A. Cohen, William W. Cohen, 17 Sep 2024, Watch Your Steps: Observable and Modular Chains of Thought, https://arxiv.org/abs/2409.15359
  • Tongxuan Liu, Wenjiang Xu, Weizhe Huang, Xingyu Wang, Jiaxing Wang, Hailong Yang, Jing Li, 26 Sep 2024, Logic-of-Thought: Injecting Logic into Contexts for Full Reasoning in Large Language Models, https://arxiv.org/abs/2409.17539
  • Zhenwen Liang, Ye Liu, Tong Niu, Xiangliang Zhang, Yingbo Zhou, Semih Yavuz, 5 Oct 2024, Improving LLM Reasoning through Scaling Inference Computation with Collaborative Verification, https://arxiv.org/abs/2410.05318
  • Qiguang Chen, Libo Qin, Jiaqi Wang, Jinxuan Zhou, Wanxiang Che, 8 Oct 2024, Unlocking the Boundaries of Thought: A Reasoning Granularity Framework to Quantify and Optimize Chain-of-Thought, https://arxiv.org/abs/2410.05695 https://github.com/LightChen233/reasoning-granularity
  • Yingqian Cui, Pengfei He, Xianfeng Tang, Qi He, Chen Luo, Jiliang Tang, Yue Xing, 21 Oct 2024, A Theoretical Understanding of Chain-of-Thought: Coherent Reasoning and Error-Aware Demonstration, https://arxiv.org/abs/2410.16540
  • Banghao Chen, Zhaofeng Zhang, Nicolas Langrené, Shengxin Zhu, 5 Sep 2024 (v5), Unleashing the potential of prompt engineering in Large Language Models: a comprehensive review, https://arxiv.org/abs/2310.14735
  • Data Camp, Jul 10, 2024, Chain-of-Thought Prompting: Step-by-Step Reasoning with LLMs, https://www.datacamp.com/tutorial/chain-of-thought-prompting
  • Pankaj, Dec 21, 2023, Chain of Thought Prompting: Guiding LLMs Step-by-Step, https://medium.com/@pankaj_pandey/chain-of-thought-prompting-guiding-llms-step-by-step-e6eac32d02d8
  • Jason Wei and Denny Zhou, May 11, 2022, Language Models Perform Reasoning via Chain of Thought, https://research.google/blog/language-models-perform-reasoning-via-chain-of-thought/
  • Cameron R. Wolfe, Jul 24, 2023, Chain of Thought Prompting for LLMs: A practical and simple approach for “reasoning” with LLMs, https://towardsdatascience.com/chain-of-thought-prompting-for-llms-33c963eead38
  • Siwei Wu, Zhongyuan Peng, Xinrun Du, Tuney Zheng, Minghao Liu, Jialong Wu, Jiachen Ma, Yizhi Li, Jian Yang, Wangchunshu Zhou, Qunshu Lin, Junbo Zhao, Zhaoxiang Zhang, Wenhao Huang, Ge Zhang, Chenghua Lin, J.H. Liu, 22 Oct 2024 (v2), A Comparative Study on Reasoning Patterns of OpenAI's o1 Model, https://arxiv.org/abs/2410.13639
  • Tanay Jaipuria, Oct 29, 2024, OpenAI's o-1 and inference-time scaling laws, https://www.tanayj.com/p/openais-o-1-and-inference-time-scaling
  • Junda Wu, Xintong Li, Ruoyu Wang, Yu Xia, Yuxin Xiong, Jianing Wang, Tong Yu, Xiang Chen, Branislav Kveton, Lina Yao, Jingbo Shang, Julian McAuley, 31 Oct 2024, OCEAN: Offline Chain-of-thought Evaluation and Alignment in Large Language Models, https://arxiv.org/abs/2410.23703
  • Siyun Zhao, Yuqing Yang, Zilong Wang, Zhiyuan He, Luna K. Qiu, Lili Qiu, 23 Sep 2024, Retrieval Augmented Generation (RAG) and Beyond: A Comprehensive Survey on How to Make your LLMs use External Data More Wisely, https://arxiv.org/abs/2409.14924
  • Guowei Xu, Peng Jin, Li Hao, Yibing Song, Lichao Sun, Li Yuan, 15 Nov 2024, LLaVA-o1: Let Vision Language Models Reason Step-by-Step, https://arxiv.org/abs/2411.10440
  • Carl Franzen, November 20, 2024, DeepSeek’s first reasoning model R1-Lite-Preview turns heads, beating OpenAI o1 performance, https://venturebeat.com/ai/deepseeks-first-reasoning-model-r1-lite-preview-turns-heads-beating-openai-o1-performance/
  • Yu Zhao, Huifeng Yin, Bo Zeng, Hao Wang, Tianqi Shi, Chenyang Lyu, Longyue Wang, Weihua Luo, Kaifu Zhang, 21 Nov 2024, Marco-o1: Towards Open Reasoning Models for Open-Ended Solutions, https://arxiv.org/abs/2411.14405
  • Jun Gao, Yongqi Li, Ziqiang Cao, Wenjie Li, 29 Nov 2024, Interleaved-Modal Chain-of-Thought, https://arxiv.org/abs/2411.19488 (Using CoT on a multimodal/vision model.)
  • Hieu Tran, Zonghai Yao, Junda Wang, Yifan Zhang, Zhichao Yang, Hong Yu, 5 Dec 2024 (v2), RARE: Retrieval-Augmented Reasoning Enhancement for Large Language Models, https://arxiv.org/abs/2412.02830
  • Tiernan Ray, Dec. 10, 2024, How Cerebras boosted Meta's Llama to 'frontier model' performance (the company also demonstrates initial training of a one-trillion-parameter AI model on a single machine using conventional DDR5 memory chips), https://www.zdnet.com/article/how-cerebras-boosted-metas-llama-to-frontier-model-performance/
  • Shibo Hao, Sainbayar Sukhbaatar, DiJia Su, Xian Li, Zhiting Hu, Jason Weston, Yuandong Tian, 9 Dec 2024, Training Large Language Models to Reason in a Continuous Latent Space, https://arxiv.org/abs/2412.06769
  • Ben Dickson, December 10, 2024, OpenAI’s o1 model doesn’t show its thinking, giving open source an advantage, https://venturebeat.com/ai/heres-how-openai-o1-might-lose-ground-to-open-source-models/
  • Zhe Chen, Weiyun Wang, Yue Cao, Yangzhou Liu, Zhangwei Gao, Erfei Cui, Jinguo Zhu, Shenglong Ye, Hao Tian, Zhaoyang Liu, Lixin Gu, Xuehui Wang, Qingyun Li, Yimin Ren, Zixuan Chen, Jiapeng Luo, Jiahao Wang, Tan Jiang, Bo Wang, Conghui He, Botian Shi, Xingcheng Zhang, Han Lv, Yi Wang, Wenqi Shao, Pei Chu, Zhongying Tu, Tong He, Zhiyong Wu, Huipeng Deng, Jiaye Ge, Kai Chen, Min Dou, Lewei Lu, Xizhou Zhu, Tong Lu, Dahua Lin, Yu Qiao, Jifeng Dai, Wenhai Wang, 6 Dec 2024, Expanding Performance Boundaries of Open-Source Multimodal Models with Model, Data, and Test-Time Scaling, https://arxiv.org/abs/2412.05271
  • Jiaqi Zhang, Chen Gao, Liyuan Zhang, Yong Li, Hongzhi Yin, 10 Dec 2024, SmartAgent: Chain-of-User-Thought for Embodied Personalized Agent in Cyber World, https://arxiv.org/abs/2412.07472 https://github.com/tsinghua-fib-lab/SmartAgent
  • Kyle Wiggers, December 14, 2024, ‘Reasoning’ AI models have become a trend, for better or worse, https://techcrunch.com/2024/12/14/reasoning-ai-models-have-become-a-trend-for-better-or-worse/
  • Alberto Romero, Dec 21, 2024, OpenAI o3 Model Is a Message From the Future: Update All You Think You Know About AI. Incredible, a miracle, more than just a better state-of-the-art AI model. https://www.thealgorithmicbridge.com/p/openai-o3-model-is-a-message-from
  • Sabrina Ortiz, Dec. 20, 2024, OpenAI unveils its most advanced o3 reasoning model on its last day of 'shipmas', https://www.zdnet.com/article/openai-unveils-its-most-advanced-o3-reasoning-model-on-its-last-day-of-shipmas/
  • Tyler McDonald, Anthony Colosimo, Yifeng Li, Ali Emami, 2 Dec 2024, Can We Afford The Perfect Prompt? Balancing Cost and Accuracy with the Economical Prompting Index, https://arxiv.org/abs/2412.01690
  • Jiaxiang Liu, Yuan Wang, Jiawei Du, Joey Tianyi Zhou, Zuozhu Liu, 18 Dec 2024, MedCoT: Medical Chain of Thought via Hierarchical Expert, https://arxiv.org/abs/2412.13736
  • Changyue Wang, Weihang Su, Qingyao Ai, Yiqun Liu, 23 Dec 2024, Knowledge Editing through Chain-of-Thought, https://arxiv.org/abs/2412.17727 https://github.com/bebr2/EditCoT
  • Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Thomas L. Griffiths, Yuan Cao, Karthik Narasimhan, 3 Dec 2023 (v2), Tree of Thoughts: Deliberate Problem Solving with Large Language Models, https://arxiv.org/abs/2305.10601 Code: https://github.com/princeton-nlp/tree-of-thought-llm
  • Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed Chi, Quoc Le, Denny Zhou, 10 Jan 2023 (v6), Chain-of-Thought Prompting Elicits Reasoning in Large Language Models. https://arxiv.org/abs/2201.11903
  • Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, Yusuke Iwasawa, 29 Jan 2023 (v4), Large Language Models are Zero-Shot Reasoners, https://arxiv.org/abs/2205.11916 https://github.com/kojima-takeshi188/zero_shot_cot ("Let's think step by step" prepended to every prompt for a type of zero-shot CoT.)
  • Xuezhi Wang, Denny Zhou, 23 May 2024 (v2), Chain-of-Thought Reasoning Without Prompting, https://arxiv.org/abs/2402.10200 ("CoT decoding" is examining the alternative paths in the decoding algorithm, which is somewhat similar to Chain-of-Thought reasoning.)
  • xjdr-alt, Dec 2024, entropix: Entropy Based Sampling and Parallel CoT Decoding, https://github.com/xjdr-alt/entropix (Parallel decoding attempts to get something similar to CoT.)
  • Huanjin Yao, Jiaxing Huang, Wenhao Wu, Jingyi Zhang, Yibo Wang, Shunyu Liu, Yingjie Wang, Yuxin Song, Haocheng Feng, Li Shen, Dacheng Tao, 24 Dec 2024, Mulberry: Empowering MLLM with o1-like Reasoning and Reflection via Collective Monte Carlo Tree Search, https://arxiv.org/abs/2412.18319 https://github.com/HJYao00/Mulberry (Multimodal multi-step reasoning like CoT.)
  • Xiangjue Dong, Maria Teleki, James Caverlee, 18 Dec 2024, A Survey on LLM Inference-Time Self-Improvement, https://arxiv.org/abs/2412.14352 https://github.com/dongxiangjue/Awesome-LLM-Self-Improvement (Broad survey of reasoning improvement methods from multi-step inference to RALM to decoding algorithms.)
  • Jiaan Wang, Fandong Meng, Yunlong Liang, Jie Zhou, 23 Dec 2024, DRT-o1: Optimized Deep Reasoning Translation via Long Chain-of-Thought, https://arxiv.org/abs/2412.17498 https://github.com/krystalan/DRT-o1 (Examines similes and metaphors in literature using long CoT.)
  • Jiacheng Ye, Shansan Gong, Liheng Chen, Lin Zheng, Jiahui Gao, Han Shi, Chuan Wu, Xin Jiang, Zhenguo Li, Wei Bi, Lingpeng Kong, 5 Dec 2024 (v3), Diffusion of Thoughts: Chain-of-Thought Reasoning in Diffusion Language Models, https://arxiv.org/abs/2402.07754
  • Shiv Sakhuja, 25 Sep 2024, Chain-of-Thought (CoT) Prompting Explained: 7 Techniques for Optimizing AI Performance, https://hub.athina.ai/athina-originals/guides-chain-of-thought-cot-prompting-explained-7-techniques-for-optimizing-ai-performance/
  • Aryasomayajula Ram Bharadwaj, 5 Dec 2024, Understanding Hidden Computations in Chain-of-Thought Reasoning, https://arxiv.org/abs/2412.04537
  • Aske Plaat, Annie Wong, Suzan Verberne, Joost Broekens, Niki van Stein, Thomas Back, 16 Jul 2024, Reasoning with Large Language Models, a Survey, https://arxiv.org/abs/2407.11511
  • Cheng Yang, Chufan Shi, Siheng Li, Bo Shui, Yujiu Yang, Wai Lam, 29 Dec 2024, LLM2: Let Large Language Models Harness System 2 Reasoning, https://arxiv.org/abs/2412.20372
  • Mayi Xu, Yunfeng Ning, Yongqi Li, Jianhao Chen, Jintao Wen, Yao Xiao, Shen Zhou, Birong Pan, Zepeng Bao, Xin Miao, Hankun Kang, Ke Sun, Tieyun Qian, 2 Jan 2025, Reasoning based on symbolic and parametric knowledge bases: a survey, https://arxiv.org/abs/2501.01030 (Extensive survey of reasoning from CoT to knowledge graphs to table-based reasoning.)
  • Yixin Ji, Juntao Li, Hai Ye, Kaixin Wu, Jia Xu, Linjian Mo, Min Zhang, 5 Jan 2025, Test-time Computing: from System-1 Thinking to System-2 Thinking, https://arxiv.org/abs/2501.02497
  • Violet Xiang, Charlie Snell, Kanishk Gandhi, Alon Albalak, Anikait Singh, Chase Blagden, Duy Phung, Rafael Rafailov, Nathan Lile, Dakota Mahan, Louis Castricato, Jan-Philipp Franken, Nick Haber, Chelsea Finn, 8 Jan 2025, Towards System 2 Reasoning in LLMs: Learning How to Think With Meta Chain-of-Thought, https://arxiv.org/abs/2501.04682
  • Andrea Matarazzo, Riccardo Torlone, 3 Jan 2025, A Survey on Large Language Models with some Insights on their Capabilities and Limitations, https://arxiv.org/abs/2501.04040 (Broad survey with many LLM topics covered from history to architectures to optimizations.)
  • Ziyang Ma, Zhuo Chen, Yuping Wang, Eng Siong Chng, Xie Chen, 13 Jan 2025, Audio-CoT: Exploring Chain-of-Thought Reasoning in Large Audio Language Model, https://arxiv.org/abs/2501.07246
  • Tong Xiao, Jingbo Zhu, 16 Jan 2025, Foundations of Large Language Models, https://arxiv.org/abs/2501.09223 (Huge 230 page paper on many topics such as training, prompting, alignment, and long context.)
  • G Bao, H Zhang, C Wang, L Yang, Y Zhang, Jan 2025, How Likely Do LLMs with CoT Mimic Human Reasoning? Proceedings of the 31st International Conference on Computational Linguistics, pages 7831–7850, January 19–24, 2025, https://aclanthology.org/2025.coling-main.524.pdf
  • Son, M., Won, Y.-J., & Lee, S. (2025). Optimizing Large Language Models: A Deep Dive into Effective Prompt Engineering Techniques. Applied Sciences, 15(3), 1430. https://doi.org/10.3390/app15031430 https://www.mdpi.com/2076-3417/15/3/1430
  • Manish Sanwal, 3 Feb 2025 (v2), Layered Chain-of-Thought Prompting for Multi-Agent LLM Systems: A Comprehensive Approach to Explainable Large Language Models, https://arxiv.org/abs/2501.18645
  • Jianfeng Pan, Senyou Deng, Shaomang Huang, 4 Feb 2025, CoAT: Chain-of-Associated-Thoughts Framework for Enhancing Large Language Models Reasoning, https://arxiv.org/abs/2502.02390 (Integrating results from an "associative memory" in CoT reasoning paths at inference time.)
  • Avinash Patil, 5 Feb 2025, Advancing Reasoning in Large Language Models: Promising Methods and Approaches, https://arxiv.org/abs/2502.03671
  • Daniel Fleischer, Moshe Berchansky, Gad Markovits, Moshe Wasserblat, 13 Feb 2025, SQuARE: Sequential Question Answering Reasoning Engine for Enhanced Chain-of-Thought in Large Language Models, https://arxiv.org/abs/2502.09390 https://github.com/IntelLabs/RAG-FiT/tree/square
  • Komal Kumar, Tajamul Ashraf, Omkar Thawakar, Rao Muhammad Anwer, Hisham Cholakkal, Mubarak Shah, Ming-Hsuan Yang, Phillip H.S. Torr, Salman Khan, Fahad Shahbaz Khan, 28 Feb 2025, LLM Post-Training: A Deep Dive into Reasoning Large Language Models, https://arxiv.org/abs/2502.21321 https://github.com/mbzuai-oryx/Awesome-LLM-Post-training
  • Bin Hong, Jiayu Liu, Zhenya Huang, Kai Zhang, Mengdi Zhang, 13 Aug 2025, Pruning Long Chain-of-Thought of Large Reasoning Models via Small-Scale Preference Optimization, https://arxiv.org/abs/2508.10164
  • Ke Niu, Haiyang Yu, Zhuofan Chen, Mengyang Zhao, Teng Fu, Bin Li, Xiangyang Xue, 13 Aug 2025, From Intent to Execution: Multimodal Chain-of-Thought Reinforcement Learning for Precise CAD Code Generation, https://arxiv.org/abs/2508.10118
  • Ziyu Guo, Renrui Zhang, Chengzhuo Tong, Zhizheng Zhao, Rui Huang, Haoquan Zhang, Manyuan Zhang, Jiaming Liu, Shanghang Zhang, Peng Gao, Hongsheng Li, Pheng-Ann Heng, 23 Jul 2025, Can We Generate Images with CoT? Let's Verify and Reinforce Image Generation Step by Step, https://arxiv.org/abs/2501.13926
  • Ang Li, Charles Wang, Kaiyu Yue, Zikui Cai, Ollie Liu, Deqing Fu, Peng Guo, Wang Bill Zhu, Vatsal Sharan, Robin Jia, Willie Neiswanger, Furong Huang, Tom Goldstein, Micah Goldblum, 22 Jul 2025, Zebra-CoT: A Dataset for Interleaved Vision Language Reasoning, https://arxiv.org/abs/2507.16746
  • Hulayyil Alshammari, Praveen Rao, 23 Jul 2025, Evaluating the Performance of AI Text Detectors, Few-Shot and Chain-of-Thought Prompting Using DeepSeek Generated Text, https://arxiv.org/abs/2507.17944
  • Binbin Ji, Siddharth Agrawal, Qiance Tang, and Yvonne Wu, 6 Jul 2025, Enhancing Spatial Reasoning in Vision-Language Models via Chain-of-Thought Prompting and Reinforcement Learning, https://arxiv.org/abs/2507.13362
  • Qiguang Chen, Libo Qin, Jinhao Liu, Dengyun Peng, Jiannan Guan, Peng Wang, Mengkang Hu, Yuhang Zhou, Te Gao, Wanxiang Che, 18 Jul 2025, Towards Reasoning Era: A Survey of Long Chain-of-Thought for Reasoning Large Language Models, https://arxiv.org/abs/2503.09567
  • Lei Chen, Xuanle Zhao, Zhixiong Zeng, Jing Huang, Yufeng Zhong, Lin Ma, 21 Jul 2025, Chart-R1: Chain-of-Thought Supervision and Reinforcement for Advanced Chart Reasoner, https://arxiv.org/abs/2507.15509
  • Luyi Ma, Wanjia Zhang, Kai Zhao, Abhishek Kulkarni, Lalitesh Morishetti, Anjana Ganesh, Ashish Ranjan, Aashika Padmanabhan, Jianpeng Xu, Jason Cho, Praveen Kanumala, Kaushiki Nag, Sumit Dutta, Kamiya Motwani, Malay Patel, Evren Korpeoglu, Sushant Kumar, Kannan Achan, 19 Jul 2025, GRACE: Generative Recommendation via Journey-Aware Sparse Attention on Chain-of-Thought Tokenization, https://arxiv.org/abs/2507.14758
  • Hao Yang, Qinghua Zhao, Lei Li, 28 Jul 2025, How Chain-of-Thought Works? Tracing Information Flow from Decoding, Projection, and Activation, https://arxiv.org/abs/2507.20758
  • Eunkyu Park, Wesley Hanwen Deng, Gunhee Kim, Motahhare Eslami, Maarten Sap, 27 Jul 2025, Cognitive Chain-of-Thought: Structured Multimodal Reasoning about Social Situations, https://arxiv.org/abs/2507.20409
  • Xiangning Yu, Zhuohan Wang, Linyi Yang, Haoxuan Li, Anjie Liu, Xiao Xue, Jun Wang, Mengyue Yang, 26 Jul 2025, Causal Sufficiency and Necessity Improves Chain-of-Thought Reasoning, https://arxiv.org/abs/2506.09853
  • Ping Yu, Jack Lanchantin, Tianlu Wang, Weizhe Yuan, Olga Golovneva, Ilia Kulikov, Sainbayar Sukhbaatar, Jason Weston, Jing Xu, 31 Jul 2025, CoT-Self-Instruct: Building high-quality synthetic prompts for reasoning and non-reasoning tasks, https://arxiv.org/abs/2507.23751
  • Xi Chen, Aske Plaat, Niki van Stein, 24 Jul 2025, How does Chain of Thought Think? Mechanistic Interpretability of Chain-of-Thought Reasoning with Sparse Autoencoding, https://arxiv.org/abs/2507.22928
  • Shixin Yi, Lin Shang, 1 Aug 2025, CoRGI: Verified Chain-of-Thought Reasoning with Visual Grounding, https://arxiv.org/abs/2508.00378
  • Jianwei Wang, Ziming Wu, Fuming Lai, Shaobing Lian, Ziqian Zeng, 1 Aug 2025, SynAdapt: Learning Adaptive Reasoning in Large Language Models via Synthetic Continuous Chain-of-Thought, https://arxiv.org/abs/2508.00574
  • Chengshuai Zhao, Zhen Tan, Pingchuan Ma, Dawei Li, Bohan Jiang, Yancheng Wang, Yingzhen Yang, Huan Liu, 2 Aug 2025, Is Chain-of-Thought Reasoning of LLMs a Mirage? A Data Distribution Lens, https://arxiv.org/abs/2508.01191
  • Jialiang Hong, Taihang Zhen, Kai Chen, Jiaheng Liu, Wenpeng Zhu, Jing Huo, Yang Gao, Depeng Wang, Haitao Wan, Xi Yang, Boyan Wang, Fanyu Meng, 4 Aug 2025, Reconsidering Overthinking: Penalizing Internal and External Redundancy in CoT Reasoning, https://arxiv.org/abs/2508.02178
  • Chloe Li, Mary Phuong, Noah Y. Siegel, 31 Jul 2025, LLMs Can Covertly Sandbag on Capability Evaluations Against Chain-of-Thought Monitoring, https://arxiv.org/abs/2508.00943
  • Weibo Zhou, Lingbo Li, Shangsong Liang, 2 Aug 2025, D-SCoRE: Document-Centric Segmentation and CoT Reasoning with Structured Export for QA-CoT Data Generation, https://arxiv.org/abs/2508.01309
  • Fan Gao, Cheng Huang, Nyima Tashi, Yutong Liu, Xiangxiang Wang, Thupten Tsering, Ban Ma-bao, Renzeg Duojie, Gadeng Luosang, Rinchen Dongrub, Dorje Tashi, Xiao Feng, Hao Wang, Yongbin Yu, 4 Aug 2025, TIBSTC-CoT: A Multi-Domain Instruction Dataset for Chain-of-Thought Reasoning in Language Models, https://arxiv.org/abs/2508.01977
  • Huihan Li, You Chen, Siyuan Wang, Yixin He, Ninareh Mehrabi, Rahul Gupta, Xiang Ren, 4 Aug 2025, Diagnosing Memorization in Chain-of-Thought Reasoning, One Token at a Time, https://arxiv.org/abs/2508.02037
  • Hongbo Jin, Ruyang Liu, Wenhao Zhang, Guibo Luo, Ge Li, 3 Aug 2025, CoT-Vid: Dynamic Chain-of-Thought Routing with Self Verification for Training-Free Video Reasoning, https://arxiv.org/abs/2505.11830
  • Zeju Li, Jianyuan Zhong, Ziyang Zheng, Xiangyu Wen, Zhijian Xu, Yingying Cheng, Fan Zhang, Qiang Xu, 5 Aug 2025, Compressing Chain-of-Thought in LLMs via Step Entropy, https://arxiv.org/abs/2508.03346
  • Jueon Park, Yein Park, Minju Song, Soyon Park, Donghyeon Lee, Seungheun Baek and Jaewoo Kang, 5 Aug 2025, CoTox: Chain-of-Thought-Based Molecular Toxicity Reasoning and Prediction, https://arxiv.org/abs/2508.03159
  • Junyao Yang, Jianwei Wang, Huiping Zhuang, Cen Chen, Ziqian Zeng, 5 Aug 2025, RCP-Merging: Merging Long Chain-of-Thought Models with Domain-Specific Models by Considering Reasoning Capability as Prior, https://arxiv.org/abs/2508.03140
  • Weihua Zheng, Xin Huang, Zhengyuan Liu, Tarun Kumar Vangani, Bowei Zou, Xiyan Tao, Yuhao Wu, Ai Ti Aw, Nancy F. Chen, Roy Ka-Wei Lee, 5 Aug 2025, AdaMCoT: Rethinking Cross-Lingual Factual Reasoning through Adaptive Multilingual Chain-of-Thought, https://arxiv.org/abs/2501.16154
  • Xingyu Chen, Junxiu An, Jun Guo, Li Wang, Jingcai Guo, 6 Aug 2025, KG-Augmented Executable CoT for Mathematical Coding, https://arxiv.org/abs/2508.04072
  • Xiao Wang, Liye Jin, Xufeng Lou, Shiao Wang, Lan Chen, Bo Jiang, Zhipeng Zhang, 7 Aug 2025, ReasoningTrack: Chain-of-Thought Reasoning for Long-term Vision-Language Tracking, https://arxiv.org/abs/2508.05221
  • Haonan Shangguan, Xiaocui Yang, Shi Feng, Daling Wang, Yifei Zhang, and Ge Yu, 7 Aug 2025, Resource-Limited Joint Multimodal Sentiment Reasoning and Classification via Chain-of-Thought Enhancement and Distillation, https://arxiv.org/abs/2508.05234
  • Tianyun Yang, Yunwen Li, Ziniu Li, Zhihang Lin, Ruoyu Sun, Tian Ding, 12 Aug 2025, Bridging Formal Language with Chain-of-Thought Reasoning to Geometry Problem Solving, https://arxiv.org/abs/2508.09099
  • Haiyun Guo, ZhiYan Hou, Yu Chen, Jinghan He, Yandu Sun, Yuzhe Zhou, Shujing Guo, Kuan Zhu, Jinqiao Wang, 31 Jul 2025, MLLM-CBench:A Comprehensive Benchmark for Continual Instruction Tuning of Multimodal LLMs with Chain-of-Thought Reasoning Analysis, https://arxiv.org/abs/2508.08275
  • Axel Delaval, Shujian Yang, Haicheng Wang, Han Qiu, Jialiang Lu, 15 Aug 2025, ToxiFrench: Benchmarking and Enhancing Language Models via CoT Fine-Tuning for French Toxicity Detection, https://arxiv.org/abs/2508.11281
  • Phuong Minh Nguyen, Tien Huu Dang, Naoya Inoue, 17 Aug 2025, Non-Iterative Symbolic-Aided Chain-of-Thought for Logical Reasoning, https://arxiv.org/abs/2508.12425
  • Zhifeng Kong, Arushi Goel, Joao Felipe Santos, Sreyan Ghosh, Rafael Valle, Wei Ping, Bryan Catanzaro, 15 Aug 2025, Audio Flamingo Sound-CoT Technical Report: Improving Chain-of-Thought Reasoning in Sound Understanding, https://arxiv.org/abs/2508.11818
  • Ruheng Wang, Hang Zhang, Trieu Nguyen, Shasha Feng, Hao-Wei Pang, Xiang Yu, Li Xiao, Peter Zhiping Zhang, 20 Aug 2025, PepThink-R1: LLM for Interpretable Cyclic Peptide Optimization with CoT SFT and Reinforcement Learning, https://arxiv.org/abs/2508.14765
  • Josh Barua, Seun Eisape, Kayo Yin, Alane Suhr, 20 Aug 2025, Long Chain-of-Thought Reasoning Across Languages, https://arxiv.org/abs/2508.14828
  • Wenqiao Zhu, Ji Liu, Rongjuncheng Zhang, Haipang Wu, Yulun Zhang, 21 Aug 2025, CARFT: Boosting LLM Reasoning via Contrastive Learning with Annotated Chain-of-Thought-based Reinforced Fine-Tuning, https://arxiv.org/abs/2508.15868
  • Jeremy Berman, Sep 17, 2025, How I got the highest score on ARC-AGI again swapping Python for English: Using Multi-Agent Collaboration with Evolutionary Test-Time Compute, https://jeremyberman.substack.com/p/how-i-got-the-highest-score-on-arc-agi-again (Generates multiple solutions then prunes them with "evolution" and iterates in multi-step inference.)
  • Zeyu Gan, Hao Yi, Yong Liu, 4 Sep 2025, CoT-Space: A Theoretical Framework for Internal Slow-Thinking via Reinforcement Learning, https://arxiv.org/abs/2509.04027
  • Sunguk Choi, Yonghoon Kwon, Heondeuk Lee, 26 Aug 2025, CAC-CoT: Connector-Aware Compact Chain-of-Thought for Efficient Reasoning Data Synthesis Across Dual-System Cognitive Tasks, https://arxiv.org/abs/2508.18743
  • Xinglong Yang, Quan Feng, Zhongying Pan, Xiang Chen, Yu Tian, Wentong Li, Shuofei Qiao, Yuxia Geng, Xingyu Zhao, Sheng-Jun Huang, 26 Aug 2025, Tailored Teaching with Balanced Difficulty: Elevating Reasoning in Multimodal Chain-of-Thought via Prompt Curriculum, https://arxiv.org/abs/2508.18673
  • Rushitha Santhoshi Mamidala, Anshuman Chhabra, Ankur Mali, 22 Aug 2025, Rethinking Reasoning in LLMs: Neuro-Symbolic Local RetoMaton Beyond ICL and CoT, https://arxiv.org/abs/2508.19271
  • Haimei Pan, Jiyun Zhang, Qinxi Wei, Xiongnan Jin, Chen Xinkai, Jie Cheng, 25 Aug 2025, Robotic Fire Risk Detection based on Dynamic Knowledge Graph Reasoning: An LLM-Driven Approach with Graph Chain-of-Thought, https://arxiv.org/abs/2509.00054
  • Sheldon Yu, Yuxin Xiong, Junda Wu, Xintong Li, Tong Yu, Xiang Chen, Ritwik Sinha, Jingbo Shang, Julian McAuley, 29 Aug 2025, Explainable Chain-of-Thought Reasoning: An Empirical Analysis on State-Aware Reasoning Dynamics, https://arxiv.org/abs/2509.00190
  • Hao Yang, Zhiyu Yang, Yunjie Zhang, Shanyi Zhu, Lin Yang, 1 Sep 2025, Rethinking the Chain-of-Thought: The Roles of In-Context Learning and Pre-trained Priors, https://arxiv.org/abs/2509.01236
  • Ziyun Zeng, Junhao Zhang, Wei Li, Mike Zheng Shou, 2 Sep 2025, Draw-In-Mind: Learning Precise Image Editing via Chain-of-Thought Imagination, https://arxiv.org/abs/2509.01986
  • Xingyue Huang, Rishabh, Gregor Franke, Ziyi Yang, Jiamu Bai, Weijie Bai, Jinhe Bi, Zifeng Ding, Yiqun Duan, Chengyu Fan, Wendong Fan, Xin Gao, Ruohao Guo, Yuan He, Zhuangzhuang He, Xianglong Hu, Neil Johnson, Bowen Li, Fangru Lin, Siyu Lin, Tong Liu, Yunpu Ma, Hao Shen, Hao Sun, Beibei Wang, Fangyijie Wang, Hao Wang, Haoran Wang, Yang Wang, Yifeng Wang, Zhaowei Wang, Ziyang Wang, Yifan Wu, Zikai Xiao, Chengxing Xie, Fan Yang, Junxiao Yang, Qianshuo Ye, Ziyu Ye, Guangtao Zeng, Yuwen Ebony Zhang, Zeyu Zhang, Zihao Zhu, Bernard Ghanem, Philip Torr, Guohao Li, 3 Sep 2025, Loong: Synthesize Long Chain-of-Thoughts at Scale through Verifiers, https://arxiv.org/abs/2509.03059
  • Haoyang He, Zihua Rong, Kun Ji, Chenyang Li, Qing Huang, Chong Xia, Lan Yang, Honggang Zhang, 7 Sep 2025, Rethinking Reasoning Quality in Large Language Models through Enhanced Chain-of-Thought via RL, https://arxiv.org/abs/2509.06024
  • Yihong Luo, Wenwu He, Zhuo-Xu Cui, Dong Liang, 8 Sep 2025, Teaching AI Stepwise Diagnostic Reasoning with Report-Guided Chain-of-Thought Learning, https://arxiv.org/abs/2509.06409
  • Vardhan Palod, Karthik Valmeekam, Kaya Stechly, Subbarao Kambhampati, 9 Sep 2025, Performative Thinking? The Brittle Correlation Between CoT Length and Problem Complexity, https://arxiv.org/abs/2509.07339
  • Sahiti Yerramilli, Nilay Pande, Rynaa Grover, Jayant Sravan Tamarapalli, 9 Sep 2025, GeoChain: Multimodal Chain-of-Thought for Geographic Reasoning, https://arxiv.org/abs/2506.00785
  • Jie Xiao, Qianyi Huang, Xu Chen and Chen Tian, 11 Sep 2025, Understanding Large Language Models in Your Pockets: Performance Study on COTS Mobile Devices, https://arxiv.org/abs/2410.03613
  • Ryan Lucas, Kayhan Behdin, Zhipeng Wang, Qingquan Song, Shao Tang, Rahul Mazumder, 15 Sep 2025, Reasoning Models Can be Accurately Pruned Via Chain-of-Thought Reconstruction, https://arxiv.org/abs/2509.12464
  • Anmol Singhal, Navya Singhal, 16 Sep 2025, Analogy-Driven Financial Chain-of-Thought (AD-FCoT): A Prompting Approach for Financial Sentiment Analysis, https://arxiv.org/abs/2509.12611
  • Jinghua Zhao, Hang Su, Lichun Fan, Zhenbo Luo, Jian Luan, Hui Wang, Haoqin Sun, Yong Qin, 14 Sep 2025, Omni-CLST: Error-aware Curriculum Learning with guided Selective chain-of-Thought for audio question answering, https://arxiv.org/abs/2509.12275
  • Heming Xia, Chak Tou Leong, Wenjie Wang, Yongqi Li, Wenjie Li, 16 Sep 2025, TokenSkip: Controllable Chain-of-Thought Compression in LLMs, https://arxiv.org/abs/2502.12067
  • Song Xu, Yilun Liu, Minggui He, Mingchen Dai, Ziang Chen, Chunguang Zhao, Jingzhou Du, Shimin Tao, Weibin Meng, Shenglin Zhang, Yongqian Sun, Boxing Chen, Daimeng Wei, 18 Sep 2025, RationAnomaly: Log Anomaly Detection with Rationality via Chain-of-Thought and Reinforcement Learning, https://arxiv.org/abs/2509.14693
  • Feiyang Li, Peng Fang, Zhan Shi, Arijit Khan, Fang Wang, Weihao Wang, Xin Zhang, Yongjian Cui, 10 Sep 2025, CoT-RAG: Integrating Chain of Thought and Retrieval-Augmented Generation to Enhance Reasoning in Large Language Models, https://arxiv.org/abs/2504.13534
  • Anand Swaroop, Akshat Nallani, Saksham Uboweja, Adiliia Uzdenova, Michael Nguyen, Kevin Zhu, Sunishchal Dev, Ashwinee Panda, Vasu Sharma, Maheep Chaudhary, 10 Sep 2025, FRIT: Using Causal Importance to Improve Chain-of-Thought Faithfulness, https://arxiv.org/abs/2509.13334
  • Pulkit Verma, Ngoc La, Anthony Favier, Swaroop Mishra, Julie A. Shah, 14 Sep 2025, Teaching LLMs to Plan: Logical Chain-of-Thought Instruction Tuning for Symbolic Planning, https://arxiv.org/abs/2509.13351
  • Kerui Huang, Shuhan Liu, Xing Hu, Tongtong Xu, Lingfeng Bao, Xin Xia, 17 Sep 2025, Reasoning Efficiently Through Adaptive Chain-of-Thought Compression: A Self-Optimizing Framework, https://arxiv.org/abs/2509.14093
  • Daniel Zhao, Abhilash Shankarampeta, Lanxiang Hu, Tajana Rosing, Hao Zhang, 2 Oct 2025, Towards Interpretable and Inference-Optimal COT Reasoning with Sparse Autoencoder-Guided Generation, https://arxiv.org/abs/2510.01528
  • Junyi Xie, Yuankun Jiao, Jina Kim, Yao-Yi Chiang, Lingyi Zhao, Khurram Shafique, 14 Oct 2025, HiCoTraj:Zero-Shot Demographic Reasoning via Hierarchical Chain-of-Thought Prompting from Trajectory, https://arxiv.org/abs/2510.12067
  • Zhongwei Yu, Wannian Xia, Xue Yan, Bo Xu, Haifeng Zhang, Yali Du, Jun Wang, 14 Oct 2025, Self-Verifying Reflection Helps Transformers with CoT Reasoning, https://arxiv.org/abs/2510.12157
  • Elija Perrier, 1 Oct 2025, Typed Chain-of-Thought: A Curry-Howard Framework for Verifying LLM Reasoning, https://arxiv.org/abs/2510.01069
  • Felix Parker, Nimeesha Chan, Chi Zhang, Kimia Ghobadi, 1 Oct 2025, Eliciting Chain-of-Thought Reasoning for Time Series Analysis using Reinforcement Learning, https://arxiv.org/abs/2510.01116
  • Eric Hanchen Jiang, Haozheng Luo, Shengyuan Pang, Xiaomin Li, Zhenting Qi, Hengli Li, Cheng-Fu Yang, Zongyu Lin, Xinfeng Li, Hao Xu, Kai-Wei Chang, Ying Nian Wu, 30 Sep 2025, Learning to Rank Chain-of-Thought: Using a Small Model, https://arxiv.org/abs/2505.14999
  • Xilin Wei, Xiaoran Liu, Yuhang Zang, Xiaoyi Dong, Yuhang Cao, Jiaqi Wang, Xipeng Qiu, Dahua Lin, 24 Sep 2025, SIM-CoT: Supervised Implicit Chain-of-Thought, https://arxiv.org/abs/2509.20317
  • Guohao Sun, Hang Hua, Jian Wang, Jiebo Luo, Sohail Dianat, Majid Rabbani, Raghuveer Rao, Zhiqiang Tao, 27 Oct 2025, Latent Chain-of-Thought for Visual Reasoning, https://arxiv.org/abs/2510.23925
  • Scott Emmons, Roland S. Zimmermann, David K. Elson, Rohin Shah, 28 Oct 2025, A Pragmatic Way to Measure Chain-of-Thought Monitorability, https://arxiv.org/abs/2510.23966
  • Bo Liu, Xiangyu Zhao, Along He, Yidi Chen, Huazhu Fu, Xiao-Ming Wu, 28 Oct 2025, GEMeX-RMCoT: An Enhanced Med-VQA Dataset for Region-Aware Multimodal Chain-of-Thought Reasoning, https://arxiv.org/abs/2506.17939
  • Artur Zolkowski, Wen Xing, David Lindner, Florian Tramèr, Erik Jenner, 21 Oct 2025, Can Reasoning Models Obfuscate Reasoning? Stress-Testing Chain-of-Thought Monitorability, https://arxiv.org/abs/2510.19851
  • Wonje Jeung, Sangyeon Yoon, Minsuk Kahng, Albert No, 23 Oct 2025, SAFEPATH: Preventing Harmful Reasoning in Chain-of-Thought via Early Alignment, https://arxiv.org/abs/2505.14667
  • Xiongkun Linghu, Jiangyong Huang, Ziyu Zhu, Baoxiong Jia, Siyuan Huang, 19 Oct 2025, Eliciting Grounded Chain-of-Thought Reasoning in 3D Scenes, https://arxiv.org/abs/2510.16714
  • Yiqi Li, Yusheng Liao, Zhe Chen, Yanfeng Wang, Yu Wang, 20 Oct 2025, DICE: Structured Reasoning in LLMs through SLM-Guided Chain-of-Thought Correction, https://arxiv.org/abs/2510.09211
  • Yue Xin, Chen Shen, Shaotian Yan, Xiaosong Yuan, Yaoming Wang, Xiaofeng Zhang, Chenxi Huang, Jieping Ye, 20 Sep 2025, SalaMAnder: Shapley-based Mathematical Expression Attribution and Metric for Chain-of-Thought Reasoning, https://arxiv.org/abs/2509.16561
  • Haojun Yu, Youcheng Li, Zihan Niu, Nan Zhang, Xuantong Gong, Huan Li, Zhiying Zou, Haifeng Qi, Zhenxiao Cao, Zijie Lan, Xingjian Yuan, Jiating He, Haokai Zhang, Shengtao Zhang, Zicheng Wang, Dong Wang, Ziwei Zhao, Congying Chen, Yong Wang, Wangyan Qin, and Qingli Zhu, 21 Sep 2025, A Chain-of-thought Reasoning Breast Ultrasound Dataset Covering All Histopathology Categories, https://arxiv.org/abs/2509.17046
  • Khai Le-Duc, Duy M. H. Nguyen, Phuong T. H. Trinh, Tien-Phat Nguyen, Nghiem T. Diep, An Ngo, Tung Vu, Trinh Vuong, Anh-Tien Nguyen, Mau Nguyen, Van Trung Hoang, Khai-Nguyen Nguyen, Hy Nguyen, Chris Ngo, Anji Liu, Nhat Ho, Anne-Christin Hauschild, Khanh Xuan Nguyen, Thanh Nguyen-Tang, Pengtao Xie, Daniel Sonntag, James Zou, Mathias Niepert, Anh Totti Nguyen, 26 Oct 2025, S-Chain: Structured Visual Chain-of-Thought For Medicine, https://arxiv.org/abs/2510.22728
  • Afrina Tabassum, Bin Guo, Xiyao Ma, Hoda Eldardiry, Ismini Lourentzou, 25 Sep 2025, MMPlanner: Zero-Shot Multimodal Procedural Planning with Chain-of-Thought Object State Reasoning, https://arxiv.org/abs/2509.21662
  • Jianzhi Yan, Le Liu, Youcheng Pan, Shiwei Chen, Zike Yuan, Yang Xiang, Buzhou Tang, 26 Sep 2025, From Long to Lean: Performance-aware and Adaptive Chain-of-Thought Compression via Multi-round Refinement, https://arxiv.org/abs/2509.22144
  • Qihua Dong, Luis Figueroa, Handong Zhao, Kushal Kafle, Jason Kuen, Zhihong Ding, Scott Cohen, Yun Fu, 3 Oct 2025, CoT Referring: Improving Referring Expression Tasks with Grounded Reasoning, https://arxiv.org/abs/2510.06243
  • Hadi Mohammadi, Anastasia Giachanou, and Ayoub Bagheri, 8 Oct 2025, EvalMORAAL: Interpretable Chain-of-Thought and LLM-as-Judge Evaluation for Moral Alignment in Large Language Models, https://arxiv.org/abs/2510.05942
  • Antonio-Gabriel Chac\'on Menke, Phan Xuan Tan, Eiji Kamioka, 20 Oct 2025, Annotating the Chain-of-Thought: A Behavior-Labeled Dataset for AI Safety, https://arxiv.org/abs/2510.18154
  • Shuxin Lin, Dhaval Patel, Christodoulos Constantinides, 21 Oct 2025, Fine-Tuned Thoughts: Leveraging Chain-of-Thought Reasoning for Industrial Asset Health Monitoring, https://arxiv.org/abs/2510.18817
  • Yongda Yu, Guohao Shi, Xianwei Wu, Haochuan He, XueMing Gu, Qianqian Zhao, Kui Liu, Qiushi Wang, Zhao Tian, Haifeng Shen, Guoping Rong, 25 Sep 2025, Fine-Tuning LLMs to Analyze Multiple Dimensions of Code Review: A Maximum Entropy Regulated Long Chain-of-Thought Approach, https://arxiv.org/abs/2509.21170
  • Zihao Zhu, Xinyu Wu, Gehan Hu, Siwei Lyu, Ke Xu, Baoyuan Wu, 29 Sep 2025, AdvChain: Adversarial Chain-of-Thought Tuning for Robust Safety Alignment of Large Reasoning Models, https://arxiv.org/abs/2509.24269
  • Chunxue Xu, Yiwei Wang, Yujun Cai, Bryan Hooi, Songze Li, 28 Sep 2025, Visual CoT Makes VLMs Smarter but More Fragile, https://arxiv.org/abs/2509.23789
  • Jianzhi Yan, Le Liu, Youcheng Pan, Shiwei Chen, Yang Xiang, Buzhou Tang, 28 Sep 2025, Towards Efficient CoT Distillation: Self-Guided Rationale Selector for Better Performance with Fewer Rationales, https://arxiv.org/abs/2509.23574
  • Haonan Ge, Yiwei Wang, Kai-Wei Chang, Hang Wu, Yujun Cai, 28 Sep 2025, FrameMind: Frame-Interleaved Chain-of-Thought for Video Reasoning via Reinforcement Learning, https://arxiv.org/abs/2509.24008
  • Wenquan Lu, Yuechuan Yang, Kyle Lee, Yanshu Li, Enqi Liu, 28 Sep 2025, Latent Chain-of-Thought? Decoding the Depth-Recurrent Transformer, https://arxiv.org/abs/2507.02199
  • Kumar Manas, Stefan Zwicklbauer and Adrian Paschke, 27 Sep 2025, CoT-TL: Low-Resource Temporal Knowledge Representation of Planning Instructions Using Chain-of-Thought Reasoning, https://arxiv.org/abs/2410.16207
  • Zhipeng Yang, Junzhuo Li, Siyu Xia and Xuming Hu, 28 Sep 2025, Internal Chain-of-Thought: Empirical Evidence for Layer-wise Subtask Scheduling in LLMs, https://arxiv.org/abs/2505.14530
  • Yuyao Zhang, Jinghao Li, Yu-Wing Tai, 17 Oct 2025, LayerCraft: Enhancing Text-to-Image Generation with CoT Reasoning and Layered Object Integration, https://arxiv.org/abs/2504.00010
  • Zhuohan Xie, Daniil Orel, Rushil Thareja, Dhruv Sahnan, Hachem Madmoun, Fan Zhang, Debopriyo Banerjee, Georgi Georgiev, Xueqing Peng, Lingfei Qian, Jimin Huang, Jinyan Su, Aaryamonvikram Singh, Rui Xing, Rania Elbadry, Chen Xu, Haonan Li, Fajri Koto, Ivan Koychev, Tanmoy Chakraborty, Yuxia Wang, Salem Lahlou, Veselin Stoyanov, Sophia Ananiadou, and Preslav Nakov, 17 Oct 2025, FinChain: A Symbolic Benchmark for Verifiable Chain-of-Thought Financial Reasoning, https://arxiv.org/abs/2506.02515
  • Xu Shen, Song Wang, Zhen Tan, Laura Yao, Xinyu Zhao, Kaidi Xu, Xin Wang, Tianlong Chen, 5 Oct 2025, FaithCoT-Bench: Benchmarking Instance-Level Faithfulness of Chain-of-Thought Reasoning, https://arxiv.org/abs/2510.04040
  • Soo Yong Kim, Suin Cho, Vincent-Daniel Yun, Gyeongyeon Hwang, 6 Oct 2025, MedCLM: Learning to Localize and Reason via a CoT-Curriculum in Medical Vision-Language Models, https://arxiv.org/abs/2510.04477
  • Imran Mansha, 6 Oct 2025, Resource-Efficient Fine-Tuning of LLaMA-3.2-3B for Medical Chain-of-Thought Reasoning, https://arxiv.org/abs/2510.05003
  • Yunfan Zhang, Kathleen McKeown, Smaranda Muresan, 5 Oct 2025, Exploring Chain-of-Thought Reasoning for Steerable Pluralistic Alignment, https://arxiv.org/abs/2510.04045
  • Zihao Xue and Zhen Bi and Long Ma and Zhenlin Hu and Yan Wang and Zhenfang Liu and Qing Sheng and Jie Xiao and Jungang Lou, 4 Oct 2025, Thought Purity: A Defense Framework For Chain-of-Thought Attack, https://arxiv.org/abs/2507.12314
  • Chengzhengxu Li, Xiaoming Liu, Zhaohan Zhang, Shaochu Zhang, Shengchao Liu, Guoxin Ma, Yu Lan, Chao Shen, 9 Oct 2025, Upfront Chain-of-Thought: A Cooperative Framework for Chain-of-Thought Compression, https://arxiv.org/abs/2510.08647
  • Zheng Zhao, Yeskendir Koishekenov, Xianjun Yang, Naila Murray, Nicola Cancedda, 10 Oct 2025, Verifying Chain-of-Thought Reasoning via Its Computational Graph, https://arxiv.org/abs/2510.09312
  • Ziyu Zheng, Yaming Yang, Ziyu Guan, Wei Zhao, Xinyan Huang and Weigang Lu, 10 Oct 2025, Beyond Single-Granularity Prompts: A Multi-Scale Chain-of-Thought Prompt Learning for Graph, https://arxiv.org/abs/2510.09394
  • Kevin Xu, Issei Sato, 24 Oct 2025, To CoT or To Loop? A Formal Comparison Between Chain-of-Thought and Looped Transformers, https://arxiv.org/abs/2505.19245
  • Daeun Lee, Jaehong Yoon, Jaemin Cho, Mohit Bansal, 24 Oct 2025, Video-Skill-CoT: Skill-based Chain-of-Thoughts for Domain-Adaptive Video Reasoning, https://arxiv.org/abs/2506.03525
  • Chengqi Duan, Kaiyue Sun, Rongyao Fang, Manyuan Zhang, Yan Feng, Ying Luo, Yufang Liu, Ke Wang, Peng Pei, Xunliang Cai, Hongsheng Li, Yi Ma, Xihui Liu, 13 Oct 2025, CodePlot-CoT: Mathematical Visual Reasoning by Thinking with Code-Driven Images, https://arxiv.org/abs/2510.11718
  • Thang Nguyen, Peter Chin, Yu-Wing Tai, 11 Oct 2025, MA-RAG: Multi-Agent Retrieval-Augmented Generation via Collaborative Chain-of-Thought Reasoning, https://arxiv.org/abs/2505.20096
  • Xiang Cheng, Chengyan Pan, Minjun Zhao, Deyang Li, Fangchao Liu, Xinyu Zhang, Xiao Zhang, Yong Liu, 13 Oct 2025, Revisiting Chain-of-Thought Prompting: Zero-shot Can Be Stronger than Few-shot, https://arxiv.org/abs/2506.14641
  • Yu Ti Huang, 20 Sep 2025, Conversational Orientation Reasoning: Egocentric-to-Allocentric Navigation with Multimodal Chain-of-Thought, https://arxiv.org/abs/2509.18200
  • Yunzhen Feng, Julia Kempe, Cheng Zhang, Parag Jain, Anthony Hartshorn, 23 Sep 2025, What Characterizes Effective Reasoning? Revisiting Length, Review, and Structure of CoT, https://arxiv.org/abs/2509.19284
  • Julian Schulz, 22 Oct 2025, A Concrete Roadmap towards Safety Cases based on Chain-of-Thought Monitoring, https://arxiv.org/abs/2510.19476
  • Kevin Xu and Issei Sato, 25 Sep 2025, A Formal Comparison Between Chain-of-Thought and Latent Thought, https://arxiv.org/abs/2509.25239
  • Raphael Schumann, Stefan Riezler, 30 Sep 2025, Boosting Process-Correct CoT Reasoning by Modeling Solvability of Multiple-Choice QA, https://arxiv.org/abs/2509.25941
  • Hongyu Chen, Guangrun Wang, 26 Sep 2025, UML-CoT: Structured Reasoning and Planning with Unified Modeling Language for Robotic Room Cleaning, https://arxiv.org/abs/2509.22628
  • Kaiwen Wang, Jin Peng Zhou, Jonathan Chang, Zhaolin Gao, Nathan Kallus, Kianté Brantley, Wen Sun, 30 Sep 2025, Value-Guided Search for Efficient Chain-of-Thought Reasoning, https://arxiv.org/abs/2505.17373
  • Zeqi Gu, Markos Georgopoulos, Xiaoliang Dai, Marjan Ghazvininejad, Chu Wang, Felix Juefei-Xu, Kunpeng Li, Yujun Shi, Zecheng He, Zijian He, Jiawei Zhou, Abe Davis, Jialiang Wang, 7 Oct 2025, Improving Chain-of-Thought Efficiency for Autoregressive Image Generation, https://arxiv.org/abs/2510.05593
  • Haoran Zhang, Shuanghao Bai, Wanqi Zhou, Yuedi Zhang, Qi Zhang, Pengxiang Ding, Cheng Chi, Donglin Wang, Badong Chen, 7 Oct 2025, VCoT-Grasp: Grasp Foundation Models with Visual Chain-of-Thought Reasoning for Language-driven Grasp Generation, https://arxiv.org/abs/2510.05827

Tree-of-Thought (ToT)

Tree-of-thought prompting generalizes chain-of-thought reasoning into a tree: the model proposes multiple alternative intermediate "thoughts" at each step, evaluates them, and can backtrack from unpromising branches (see Yao et al., "Tree of Thoughts," cited in the chain-of-thought list above).

Research papers on Tree-of-Thought (ToT) prompting:

Skeleton-of-Thought

Skeleton-of-thought is a technique that aims not only to improve accuracy, but also to improve the speed and cost efficiency of inference, by splitting a single prompt into multiple smaller sub-prompts that can be executed in parallel to reduce overall latency.

The basic speedup works like this (see the code sketch after the list):

  • Generate an outline quickly (short LLM answer)
  • For each outline point, generate a brief answer (multiple focused LLM queries to compute short answers in parallel)
  • Combine them into a final, longer answer (possibly with an LLM, but this will be a long text, so heuristic packing/merging of sub-answers is faster)
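
Below is a minimal sketch of this pipeline, assuming a hypothetical, thread-safe llm() helper: one call drafts the outline, the per-point expansions run in parallel threads, and the sub-answers are merged heuristically without a further LLM call.

    from concurrent.futures import ThreadPoolExecutor

    def llm(prompt: str) -> str:
        # Placeholder: substitute a real (thread-safe) chat-completion call.
        return "(model output)"

    def skeleton_of_thought(question: str) -> str:
        # Step 1: a quick, short outline.
        outline = llm(
            f"Question: {question}\n"
            "Write a skeleton answer: 3-5 numbered points, a few words each."
        )
        points = [line for line in outline.splitlines() if line.strip()]

        # Step 2: expand each outline point in parallel.
        def expand(point: str) -> str:
            return llm(
                f"Question: {question}\nOutline:\n{outline}\n"
                f"Expand only this point, in 1-2 sentences: {point}"
            )

        with ThreadPoolExecutor(max_workers=len(points) or 1) as pool:
            expansions = list(pool.map(expand, points))

        # Step 3: heuristic merge (no extra LLM call over the long text).
        return "\n\n".join(expansions)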

Research papers on skeleton-of-thought:

  • L. Zheng, L. Yin, Z. Xie, J. Huang, C. Sun, C. H. Yu, S. Cao, C. Kozyrakis, I. Stoica, J. E. Gonzalez et al., Dec 2023, Efficiently programming large language models using SGLang, arXiv preprint arXiv:2312.07104, 2023, https://arxiv.org/abs/2312.07104 (Uses a radix attention method, a trie or prefix tree, for KV caching.)
  • Xuefei Ning, Zinan Lin, November 17, 2023, Skeleton-of-Thought: Parallel decoding speeds up and improves LLM output, Microsoft Research Blog, https://www.microsoft.com/en-us/research/blog/skeleton-of-thought-parallel-decoding-speeds-up-and-improves-llm-output/ Code: https://github.com/imagination-research/sot/
  • S. Jin, Y. Wu, H. Zheng, Q. Zhang, M. Lentz, Z. M. Mao, A. Prakash, F. Qian, and D. Zhuo, “Adaptive skeleton graph decoding,” arXiv preprint arXiv:2402.12280, 2024. https://arxiv.org/abs/2402.12280
  • M. Liu, A. Zeng, B. Wang, P. Zhang, J. Tang, and Y. Dong, “Apar: Llms can do auto-parallel auto-regressive decoding,” arXiv preprint arXiv:2401.06761, 2024. https://arxiv.org/abs/2401.06761
  • Zixuan Zhou, Xuefei Ning, Ke Hong, Tianyu Fu, Jiaming Xu, Shiyao Li, Yuming Lou, Luning Wang, Zhihang Yuan, Xiuhong Li, Shengen Yan, Guohao Dai, Xiao-Ping Zhang, Yuhan Dong, Yu Wang, 8 Jun 2024 (v2), A Survey on Efficient Inference for Large Language Models, https://arxiv.org/abs/2404.14294
  • Mahsa Khoshnoodi, Vinija Jain, Mingye Gao, Malavika Srikanth, Aman Chadha, 24 May 2024 (v2), A Comprehensive Survey of Accelerated Generation Techniques in Large Language Models, https://arxiv.org/abs/2405.13019
  • Steven Kolawole, Keshav Santhanam, Virginia Smith, Pratiksha Thaker, Nov 2024, Extracting Parallelism from Large Language Model Queries, https://openreview.net/pdf?id=CZHt9kLS5S
  • Huiyou Zhan, Xuan Zhang, Haisheng Tan, Han Tian, Dongping Yong, Junyang Zhang, Xiang-Yang Li, 16 Jan 2025, PICE: A Semantic-Driven Progressive Inference System for LLM Serving in Cloud-Edge Networks, https://arxiv.org/abs/2501.09367 (Generate an outline in the cloud that is filled in by edge models, which is similar to Skeleton-of-Thought.)
  • Xuefei Ning, Zinan Lin, Zixuan Zhou, Zifu Wang, Huazhong Yang, Yu Wang, May 2024, Skeleton-of-Thought: Prompting LLMs for Efficient Parallel Generation, ICLR 2024, https://www.microsoft.com/en-us/research/publication/skeleton-of-thought-large-language-models-can-do-parallel-decoding/ https://neurips2023-enlsp.github.io/papers/paper_33.pdf Code: https://github.com/imagination-research/sot/
  • Ruibin Xiong, Yimeng Chen, Dmitrii Khizbullin, Jürgen Schmidhuber, 11 Mar 2025, Beyond Outlining: Heterogeneous Recursive Planning for Adaptive Long-form Writing with Language Models, https://arxiv.org/abs/2503.08275
  • Yijiong Yu, 26 Mar 2025, Accelerate Parallelizable Reasoning via Parallel Decoding within One Sequence, https://arxiv.org/abs/2503.20533 https://github.com/yuyijiong/parallel-decoding-in-one-sequence
  • Siqi Fan, Peng Han, Shuo Shang, Yequan Wang, Aixin Sun, 28 May 2025, CoThink: Token-Efficient Reasoning via Instruct Models Guiding Reasoning Models, https://arxiv.org/abs/2505.22017 (Generate an outline before reasoning.)
  • Ali Ismail-Fawaz and Maxime Devanne and Stefano Berretti and Jonathan Weber and Germain Forestier, 28 Jul 2025, Deep Learning for Skeleton Based Human Motion Rehabilitation Assessment: A Benchmark, https://arxiv.org/abs/2507.21018
  • Tiantian Liu, Xiao Li, Huan Li, Hua Lu, Christian S. Jensen, Jianliang Xu, 4 Aug 2025, Skeleton-Guided Learning for Shortest Path Search, https://arxiv.org/abs/2508.02270
  • Youwei Zhou and Tianyang Xu and Cong Wu and Xiaojun Wu and Josef Kittler, 4 Aug 2025, Adaptive Hyper-Graph Convolution Network for Skeleton-based Human Action Recognition with Virtual Connections, https://arxiv.org/abs/2411.14796
  • Devansh Arora, Nitin Kumar, Sukrit Gupta, 15 Aug 2025, Does the Skeleton-Recall Loss Really Work?, https://arxiv.org/abs/2508.11374
  • Maolin Sun, Yibiao Yang, Yuming Zhou, 28 Aug 2025, Boosting Skeleton-Driven SMT Solver Fuzzing by Leveraging LLM to Produce Formula Generators, https://arxiv.org/abs/2508.20340
  • Dongjingdin Liu, Pengpeng Chen, Miao Yao, Yijing Lu, Zijie Cai, Yuxin Tian, 12 Sep 2025, TSGCNeXt: Dynamic-Static Multi-Graph Convolution for Efficient Skeleton-Based Action Recognition with Long-term Learning Potential, https://arxiv.org/abs/2304.11631
  • Sanjeda Akter, Ibne Farabi Shihab, Anuj Sharma, 16 Sep 2025, Selective Risk Certification for LLM Outputs via Information-Lift Statistics: PAC-Bayes, Robustness, and Skeleton Design, https://arxiv.org/abs/2509.12527
  • Bo Wang, Tianyu Li, Ruishi Li, Umang Mathur, Prateek Saxena, 10 Apr 2025, Program Skeletons for Automated Program Translation, https://arxiv.org/abs/2504.07483
  • Feng Ding, Haisheng Fu, Soroush Oraki, Jie Liang, 18 Sep 2025, LSTC-MDA: A Unified Framework for Long-Short Term Temporal Convolution and Mixed Data Augmentation in Skeleton-Based Action Recognition, https://arxiv.org/abs/2509.14619
  • Liangjin Liu, Haoyang Zheng, Zhengzhong Zhu, Pei Zhou, 18 Sep 2025, Skeleton-based sign language recognition using a dual-stream spatio-temporal dynamic graph convolutional network, https://arxiv.org/abs/2509.08661
  • Wen-Bo Xie, Xun Fu, Bin Chen, Yan-Li Lee, Tao Deng, Tian Zou, Xin Wang, Zhen Liu, Jaideep Srivastavad, 10 Sep 2025, Data Skeleton Learning: Scalable Active Clustering with Sparse Graph Structures, https://arxiv.org/abs/2509.08530
  • Yewang Chen and Junfeng Li and Shuyin Xia and Qinghong Lai and Xinbo Gao and Guoyin Wang and Dongdong Cheng and Yi Liu and Yi Wang, 28 Sep 2025, GBSK: Skeleton Clustering via Granular-ball Computing and Multi-Sampling for Large-Scale Data, https://arxiv.org/abs/2509.23742
  • Yongqiang Wang, Weigang Li, Wenping Liu, Zhiqiang Tian, Jinling Li, 29 Sep 2025, Skeleton-based Robust Registration Framework for Corrupted 3D Point Clouds, https://arxiv.org/abs/2509.24273
  • Ziying Zhang, Yaqing Wang, Quanming Yao, 5 Oct 2025, Searching Meta Reasoning Skeleton to Guide LLM Reasoning, https://arxiv.org/abs/2510.04116
  • Suming Qiu, Jing Li, Zhicheng Zhou, Junjie Huang, Linyuan Qiu, Zhijie Sun, 10 Oct 2025, HES-SQL: Hybrid Reasoning for Efficient Text-to-SQL with Structural Skeleton Guidance, https://arxiv.org/abs/2510.08896
  • A. Candito, A. Dragan, R. Holbrey, A. Ribeiro, R. Donners, C. Messiou, N. Tunariu, D.-M. Koh, and M. D. Blackledge, 7 Oct 2025, A weakly-supervised deep learning model for fast localisation and delineation of the skeleton, internal organs, and spinal canal on Whole-Body Diffusion-Weighted MRI (WB-DWI), https://arxiv.org/abs/2503.20722

Prompt Optimization

Prompt optimization is the technique of improving the results from prompt engineering. In a sense, this covers the whole discipline, but there are specific technical approaches to optimizing prompts. One subarea is automatic prompt optimization, also called "programmatic prompting," which uses an LLM to automatically tweak prompt wording, but there are also many manual approaches.
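
Below is a minimal sketch of automatic prompt optimization, assuming a hypothetical llm() helper and a small labeled dev set (both invented for illustration): the LLM proposes rewrites of the current prompt template, each candidate is scored on the dev set, and the best scorer is kept.

    def llm(prompt: str) -> str:
        # Placeholder: substitute a real chat-completion call here.
        return "(model output)"

    # Hypothetical labeled dev set for scoring candidate templates.
    DEV_SET = [("2+2=", "4"), ("3*3=", "9")]

    def score(template: str) -> float:
        # Fraction of dev-set questions answered correctly by this template.
        hits = sum(llm(template.format(x=x)).strip() == y for x, y in DEV_SET)
        return hits / len(DEV_SET)

    def optimize(template: str, rounds: int = 3) -> str:
        best, best_score = template, score(template)
        for _ in range(rounds):
            # Ask the LLM to tweak the current best template.
            candidate = llm(
                "Rewrite this prompt template so its answers are more "
                f"accurate; keep the {{x}} placeholder intact:\n{best}"
            )
            if "{x}" in candidate:
                s = score(candidate)
                if s > best_score:
                    best, best_score = candidate, s
        return best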

Research papers on prompt optimization include:

Programmatic Prompt Engineering

Programmatic prompting, or "auto prompting," is the use of software automation, such as an extra LLM step, to auto-create better prompts from a user's original query text. The result should be a better-structured prompt and, in turn, a better answer.
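
A minimal sketch, assuming a hypothetical llm() helper: an extra LLM call rewrites the raw user query into a fuller prompt before the main call answers it.

    def llm(prompt: str) -> str:
        # Placeholder: substitute a real chat-completion call here.
        return "(model output)"

    def auto_prompt(user_query: str) -> str:
        # Extra LLM step: rewrite the raw query into a fuller prompt.
        improved = llm(
            "Rewrite the following user request as a clear, specific LLM "
            "prompt. Add a suitable persona, the desired output format, and "
            f"any constraints that seem implied.\nRequest: {user_query}"
        )
        # Main call: answer the improved prompt.
        return llm(improved)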

Research on programmatic prompt engineering:

Advanced Prompt Engineering Techniques

Research papers on advanced prompting methods:

Prompt Efficiency Optimizations

There are several types of LLM inference speed optimization that involve prompt tokens. The main ideas are (a simple caching sketch follows the list):

  • Prompt compression — fewer tokens to process.
  • Prompt caching — storing and reusing the outputs or KV cache data.
  • Parallel processing — e.g., skeleton-of-thought prompting.
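
Below is a minimal sketch of response-level prompt caching, assuming a hypothetical llm() helper: identical (whitespace-normalized) prompts are answered from an in-memory dictionary rather than re-running inference. Production serving systems more often cache the KV state of shared prompt prefixes inside the inference engine itself.

    import hashlib

    def llm(prompt: str) -> str:
        # Placeholder: substitute a real chat-completion call here.
        return "(model output)"

    _cache: dict[str, str] = {}

    def cached_llm(prompt: str) -> str:
        # Key on a whitespace-normalized hash so trivially different
        # spacings of the same prompt still hit the cache.
        key = hashlib.sha256(" ".join(prompt.split()).encode()).hexdigest()
        if key not in _cache:
            _cache[key] = llm(prompt)  # cache miss: run inference once
        return _cache[key]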

Prompt compression research. Various prompt compression techniques include:

Prompt caching research. The various types of caching may include:

General Research on Prompt Engineering
