Aussie AI

In-Context Learning (ICL)

  • Last Updated 17 November, 2025
  • by David Spuler, Ph.D.

What is In-Context Learning (ICL)?

In-Context Learning (ICL) is the general idea of an LLM using knowledge from its input prompt to answer a question. This doesn't sound very revolutionary these days, since we're all familiar with RAG architectures, but there was a time when it was a novel concept. Back when researchers put all their energy into pre-training the parametric knowledge of a model, it wasn't immediately obvious that a model could be "augmented" with extra facts, just by putting them into the input string.
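The mechanics really are that simple: the extra facts are concatenated into the prompt text ahead of the question. Below is a minimal Python sketch of the idea; the facts are made up for illustration, and call_llm is a hypothetical placeholder for whatever LLM inference API you use.

    def build_augmented_prompt(context_facts: list[str], question: str) -> str:
        """Prepend extra facts to a question, so the LLM can answer from
        the prompt context rather than only its parametric knowledge."""
        context_block = "\n".join(f"- {fact}" for fact in context_facts)
        return (
            "Use the following context to answer the question.\n\n"
            f"Context:\n{context_block}\n\n"
            f"Question: {question}\n"
            "Answer:"
        )

    # Illustrative only: a fact the model cannot know from pre-training.
    facts = ["The staff meeting was moved to Thursday at 2pm."]
    prompt = build_augmented_prompt(facts, "When is the staff meeting?")
    # answer = call_llm(prompt)  # call_llm: hypothetical stand-in, not a real API

A RAG pipeline adds only one step in front of this: the facts are first retrieved from a datastore based on the question, but the prompt construction is essentially the same.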

After all, the RAG technique itself was once just an unproven research paper. The authors of the first RAG paper have gone on record saying that, if they'd known how popular it would become, they would have chosen a better name!

Augmentation of knowledge via extra context tokens in the middle of the prompt is no longer anything new. ICL is the underpinning idea behind various LLM prompt augmentation methods, including few-shot prompting, retrieval-augmented generation (RAG), and chain-of-thought exemplars.
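Few-shot prompting is the textbook case: the "extra context" is a handful of worked examples, and the model picks up the task from them at inference time, with no weight updates at all. Here is a minimal sketch in the same style as above (again with the hypothetical call_llm placeholder):

    def build_few_shot_prompt(demos: list[tuple[str, str]], query: str) -> str:
        """Format demonstration pairs followed by the new query,
        the standard few-shot in-context learning prompt layout."""
        parts = [f"Input: {x}\nOutput: {y}" for x, y in demos]
        parts.append(f"Input: {query}\nOutput:")
        return "\n\n".join(parts)

    demos = [
        ("The movie was fantastic", "positive"),
        ("A dull, lifeless film", "negative"),
    ]
    prompt = build_few_shot_prompt(demos, "An absolute triumph of a sequel")
    # label = call_llm(prompt)  # the model infers the sentiment task from the demos

That a few examples in the prompt can substitute for gradient-based training is precisely what makes ICL so interesting to researchers, as the papers below attest.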

Research on ICL

Research papers on ICL include:

  • João Monteiro, Étienne Marcotte, Pierre-André Noël, Valentina Zantedeschi, David Vázquez, Nicolas Chapados, Christopher Pal, Perouz Taslakian, 23 Apr 2024, XC-Cache: Cross-Attending to Cached Context for Efficient LLM Inference, https://arxiv.org/abs/2404.15420
  • Andrea Matarazzo, Riccardo Torlone, 3 Jan 2025, A Survey on Large Language Models with some Insights on their Capabilities and Limitations, https://arxiv.org/abs/2501.04040 (Broad survey with many LLM topics covered from history to architectures to optimizations.)
  • Tong Xiao, Jingbo Zhu, 16 Jan 2025, Foundations of Large Language Models, https://arxiv.org/abs/2501.09223 (Huge 230 page paper on many topics such as training, prompting, alignment, and long context.)
  • Son, M., Won, Y.-J., & Lee, S. (2025). Optimizing Large Language Models: A Deep Dive into Effective Prompt Engineering Techniques. Applied Sciences, 15(3), 1430. https://doi.org/10.3390/app15031430 https://www.mdpi.com/2076-3417/15/3/1430
  • Fabio Matricardi, Jan 18, 2025, How a Small Language Model Can Achieve 100% Accuracy: In Context Learning is Underrated — ICL is the secret key to reach performance boosting — teach to an AI how to say “I don’t know” — part 2, https://generativeai.pub/how-a-small-language-model-can-achieve-100-accuracy-323a789ffa83
  • Xiaoran Liu, Ruixiao Li, Mianqiu Huang, Zhigeng Liu, Yuerong Song, Qipeng Guo, Siyang He, Qiqi Wang, Linlin Li, Qun Liu, Yaqian Zhou, Xuanjing Huang, Xipeng Qiu, 24 Feb 2025, Thus Spake Long-Context Large Language Model, https://arxiv.org/abs/2502.17129 (Impressive survey of many techniques to improve efficiency and accuracy of long context processing in both inference and training, covering text, video and multimodal models.)
  • Benoit Dherin, Michael Munn, Hanna Mazzawi, Michael Wunder, Javier Gonzalvo, 21 Jul 2025, Learning without training: The implicit dynamics of in-context learning, https://arxiv.org/abs/2507.16003
  • Jathin Korrapati, Patrick Mendoza, Aditya Tomar, Abein Abraham, 13 Aug 2025, Can Transformers Break Encryption Schemes via In-Context Learning?, https://arxiv.org/abs/2508.10235
  • Shugang Hao, Hongbo Li and Lingjie Duan, 14 Aug 2025, To Theoretically Understand Transformer-Based In-Context Learning for Optimizing CSMA, https://arxiv.org/abs/2508.09146
  • Shahriar Golchin, Yanfei Chen, Rujun Han, Manan Gandhi, Tianli Yu, Swaroop Mishra, Mihai Surdeanu, Rishabh Agarwal, Chen-Yu Lee, Tomas Pfister, 22 Jul 2025, Towards Compute-Optimal Many-Shot In-Context Learning, https://arxiv.org/abs/2507.16217
  • Jihyung Lee, Jin-Seop Lee, Jaehoon Lee, YunSeok Choi, Jee-Hyong Lee, 22 Jul 2025, DCG-SQL: Enhancing In-Context Learning for Text-to-SQL with Deep Contextual Schema Link Graph, https://arxiv.org/abs/2505.19956
  • Yongyi Yang, Hidenori Tanaka, Wei Hu, 17 Jul 2025, Provable Low-Frequency Bias of In-Context Learning of Representations, https://arxiv.org/abs/2507.13540
  • Erfan Pirmorad, 20 Jul 2025, Exploring the In-Context Learning Capabilities of LLMs for Money Laundering Detection in Financial Graphs, https://arxiv.org/abs/2507.14785
  • Xing Shen, Justin Szeto, Mingyang Li, Hengguan Huang, Tal Arbel, 29 Jun 2025, Exposing and Mitigating Calibration Biases and Demographic Unfairness in MLLM Few-Shot In-Context Learning for Medical Image Classification, https://arxiv.org/abs/2506.23298
  • Shuo Chen, Jianzhe Liu, Zhen Han, Yan Xia, Daniel Cremers, Philip Torr, Volker Tresp, Jindong Gu, 21 Jul 2025, True Multimodal In-Context Learning Needs Attention to the Visual Context, https://arxiv.org/abs/2507.15807
  • Yijing Lin, Mengqi Huang, Shuhan Zhuang, Zhendong Mao, 20 Jul 2025, RealGeneral: Unifying Visual Generation via Temporal In-Context Learning with Video Models, https://arxiv.org/abs/2503.10406
  • Hongbo Li, Lingjie Duan and Yingbin Liang, 28 Jul 2025, Provable In-Context Learning of Nonlinear Regression with Transformers, https://arxiv.org/abs/2507.20443
  • Kacper Kadziolka and Saber Salehkaleybar, 31 Jul 2025, Causal Reasoning in Pieces: Modular In-Context Learning for Causal Discovery, https://arxiv.org/abs/2507.23488
  • Kwesi Cobbina and Tianyi Zhou, 30 Jul 2025, Where to show Demos in Your Prompt: A Positional Bias of In-Context Learning, https://arxiv.org/abs/2507.22887
  • Huiyi Chen, Jiawei Peng, Kaihua Tang, Xin Geng, Xu Yang, 30 Jul 2025, Enhancing Multimodal In-Context Learning for Image Classification through Coreset Optimization, https://arxiv.org/abs/2504.14200
  • Patrik Kenfack, Samira Ebrahimi Kahou, Ulrich Aïvodji, 1 Aug 2025, Towards Fair In-Context Learning with Tabular Foundation Models, https://arxiv.org/abs/2505.09503
  • Thomas F Burns, Tomoki Fukai, Christopher J Earls, 4 Aug 2025, Associative memory inspires improvements for in-context learning using a novel attention residual stream architecture, https://arxiv.org/abs/2412.15113
  • Ruixing Zhang, Bo Wang, Tongyu Zhu, Leilei Sun, Weifeng Lv, 5 Aug 2025, Urban In-Context Learning: Bridging Pretraining and Inference through Masked Diffusion for Urban Profiling, https://arxiv.org/abs/2508.03042
  • Simon Lepage, Jeremie Mary and David Picard, 5 Aug 2025, Markov Chain Estimation with In-Context Learning, https://arxiv.org/abs/2508.03934
  • Usman Anwar, Johannes Von Oswald, Louis Kirsch, David Krueger, Spencer Frei, 5 Aug 2025, Understanding In-Context Learning of Linear Models in Transformers Through an Adversarial Lens, https://arxiv.org/abs/2411.05189
  • Yanshu Li, Yi Cao, Hongyang He, Qisen Cheng, Xiang Fu, Xi Xiao, Tianyang Wang, Ruixiang Tang, 8 Aug 2025, M²IV: Towards Efficient and Fine-grained Multimodal In-Context Learning via Representation Engineering, https://arxiv.org/abs/2504.04633
  • Hengzhe Zhang, Qi Chen, Bing Xue, Wolfgang Banzhaf, Mengjie Zhang, 8 Aug 2025, LLM-Meta-SR: In-Context Learning for Evolving Selection Operators in Symbolic Regression, https://arxiv.org/abs/2505.18602
  • Chenrui Liu, Falong Tan, Chuanlong Xie, Yicheng Zeng and Lixing Zhu, 12 Aug 2025, In-Context Learning as Nonparametric Conditional Probability Estimation: Risk Bounds and Optimality, https://arxiv.org/abs/2508.08673
  • Jaeyeon Kim, Sehyun Kwon, Joo Young Choi, Jongho Park, Jaewoong Cho, Jason D. Lee, Ernest K. Ryu, 12 Aug 2025, Task Diversity Shortens the ICL Plateau, https://arxiv.org/abs/2410.05448
  • Trevine Oorloff, Vishwanath Sindagi, Wele Gedara Chaminda Bandara, Ali Shafahi, Amin Ghiasi, Charan Prakash, Reza Ardekani, 13 Aug 2025, Stable Diffusion Models are Secretly Good at Visual In-Context Learning, https://arxiv.org/abs/2508.09949
  • Dake Bu, Wei Huang, Andi Han, Atsushi Nitanda, Taiji Suzuki, Qingfu Zhang, Hau-San Wong, 13 Aug 2025, Provably Transformers Harness Multi-Concept Word Semantics for Efficient In-Context Learning, https://arxiv.org/abs/2411.02199
  • Chuanliu Fan, Zicheng Ma, Jun Gao, Nan Yu, Jun Zhang, Ziqiang Cao, Yi Qin Gao, Guohong Fu, 17 Aug 2025, ProtTeX-CC: Activating In-Context Learning in Protein LLM via Two-Stage Instruction Compression, https://arxiv.org/abs/2508.12212
  • Chase Goddard, Lindsay M. Smith, Vudtiwat Ngampruetikorn, David J. Schwab, 18 Aug 2025, When can in-context learning generalize out of task distribution?, https://arxiv.org/abs/2506.05574
  • Aleksandra Bakalova, Yana Veitsman, Xinting Huang, Michael Hahn, 22 Aug 2025, Contextualize-then-Aggregate: Circuits for In-Context Learning in Gemma-2 2B, https://arxiv.org/abs/2504.00132
  • Fernando Martinez-Lopez, Tao Li, Yingdong Lu, Juntao Chen, 8 Aug 2025, In-Context Reinforcement Learning via Communicative World Models, https://arxiv.org/abs/2508.06659
  • Aditya Varre, Gizem Yüce, Nicolas Flammarion, 18 Aug 2025, Learning In-context n-grams with Transformers: Sub-n-grams Are Near-stationary Points, https://arxiv.org/abs/2508.12837
  • Quan Nguyen and Thanh Nguyen-Tang, 20 Aug 2025, One-Layer Transformers are Provably Optimal for In-context Reasoning and Distributional Association Learning in Next-Token Prediction Tasks, https://arxiv.org/abs/2505.15009
  • Wentao Wang, Guangyuan Jiang, Tal Linzen, Brenden M. Lake, 4 Sep 2025, Rapid Word Learning Through Meta In-Context Learning, https://arxiv.org/abs/2502.14791
  • Jacob Russin, Ellie Pavlick, Michael J. Frank, 4 Sep 2025, The dynamic interplay between in-context and in-weight learning in humans and neural networks, https://arxiv.org/abs/2402.08674
  • Ziniu Zhang, Zhenshuo Zhang, Dongyue Li, Lu Wang, Jennifer Dy, Hongyang R. Zhang, 27 Aug 2025, Linear-Time Demonstration Selection for In-Context Learning via Gradient Estimation, https://arxiv.org/abs/2508.19999
  • Rushitha Santhoshi Mamidala, Anshuman Chhabra, Ankur Mali, 22 Aug 2025, Rethinking Reasoning in LLMs: Neuro-Symbolic Local RetoMaton Beyond ICL and CoT, https://arxiv.org/abs/2508.19271
  • Ruobing Wang, Qiaoyu Tan, Yili Wang, Ying Wang, Xin Wang, 27 Aug 2025, CrystalICL: Enabling In-Context Learning for Crystal Generation, https://arxiv.org/abs/2508.20143
  • Souradeep Nanda, Anay Majee, Rishabh Iyer, 28 Aug 2025, InSQuAD: In-Context Learning for Efficient Retrieval via Submodular Mutual Information to Enforce Quality and Diversity, https://arxiv.org/abs/2508.21003
  • Gen Li, Yuchen Jiao, Yu Huang, Yuting Wei, Yuxin Chen, 28 Aug 2025, Transformers Meet In-Context Learning: A Universal Approximation Theory, https://arxiv.org/abs/2506.05200
  • Renat Sergazinov, Shao-An Yin, 30 Aug 2025, Chunked TabPFN: Exact Training-Free In-Context Learning for Long-Context Tabular Data, https://arxiv.org/abs/2509.00326
  • Stefano Fioravanti, Matteo Zavatteri, Roberto Confalonieri, Kamyar Zeinalipour, Paolo Frazzetto, Alessandro Sperduti, Nicolò Navarin, 1 Sep 2025, Iterative In-Context Learning to Enhance LLMs Abstract Reasoning: The Case-Study of Algebraic Tasks, https://arxiv.org/abs/2509.01267
  • Sachin Goyal, David Lopez-Paz, Kartik Ahuja, 1 Sep 2025, Distilled Pretraining: A modern lens of Data, In-Context Learning and Test-Time Scaling, https://arxiv.org/abs/2509.01649
  • Weicao Deng, Sangwoo Park, Min Li, and Osvaldo Simeone, 1 Sep 2025, Optimizing In-Context Learning for Efficient Full Conformal Prediction, https://arxiv.org/abs/2509.01840
  • Hao Yang, Zhiyu Yang, Yunjie Zhang, Shanyi Zhu, Lin Yang, 1 Sep 2025, Rethinking the Chain-of-Thought: The Roles of In-Context Learning and Pre-trained Priors, https://arxiv.org/abs/2509.01236
  • I. Shavindra Jayasekera, Jacob Si, Wenlong Chen, Filippo Valdettaro, A. Aldo Faisal, Yingzhen Li, 2 Sep 2025, Variational Uncertainty Decomposition for In-Context Learning, https://arxiv.org/abs/2509.02327
  • Teeradaj Racharak, Chaiyong Ragkhitwetsagul, Chommakorn Sontesadisai, Thanwadee Sunetnanta, 8 Sep 2025, Test It Before You Trust It: Applying Software Testing for Trustworthy In-context Learning, https://arxiv.org/abs/2504.18827
  • Michele Joshua Maggini, Dhia Merzougui, Rabiraj Bandyopadhyay, Ga\"el Dias, Fabrice Maurel, Pablo Gamallo, 9 Sep 2025, Are LLMs Enough for Hyperpartisan, Fake, Polarized and Harmful Content Detection? Evaluating In-Context Learning vs. Fine-Tuning, https://arxiv.org/abs/2509.07768
  • Adrian de Wynter, 12 Sep 2025, Is In-Context Learning Learning?, https://arxiv.org/abs/2509.10414
  • Haoyu Dong, Pengkun Zhang, Mingzhe Lu, Yanzhen Shen, Guolin Ke, 12 Sep 2025, MachineLearningLM: Scaling Many-shot In-context Learning via Continued Pretraining, https://arxiv.org/abs/2509.06806
  • Daniil Ignatev, Nan Li, Hugh Mee Wong, Anh Dang, Shane Kaszefski Yaschuk, 11 Sep 2025, DeMeVa at LeWiDi-2025: Modeling Perspectives with In-Context Learning and Label Distribution Learning, https://arxiv.org/abs/2509.09524
  • Jônata Tyska Carvalho and Stefano Nolfi, 11 Sep 2025, LLMs for sensory-motor control: Combining in-context and iterative learning, https://arxiv.org/abs/2506.04867
  • Vaibhav Singh, Soumya Suvra Ghosal, Kapu Nirmal Joshua, Soumyabrata Pal, Sayak Ray Chowdhury, 19 Sep 2025, KITE: Kernelized and Information Theoretic Exemplars for In-Context Learning, https://arxiv.org/abs/2509.15676
  • Josip Jukić, Jan Šnajder, 18 Sep 2025, Disentangling Latent Shifts of In-Context Learning with Weak Supervision, https://arxiv.org/abs/2410.01508
  • Seongho Joo, Hyukhun Koh, Kyomin Jung, 13 Sep 2025, Public Data Assisted Differentially Private In-Context Learning, https://arxiv.org/abs/2509.10932
  • Chi Han, Ziqi Wang, Han Zhao, Heng Ji, 12 Sep 2025, Understanding Emergent In-Context Learning from a Kernel Regression Perspective, https://arxiv.org/abs/2305.12766
  • Kazumi Kasaura, Naoto Onda, Yuta Oriike, Masaya Taniguchi, Akiyoshi Sannai, Sho Sonoda, 16 Sep 2025, Discovering New Theorems via LLMs with In-Context Proof Learning in Lean, https://arxiv.org/abs/2509.14274
  • Samet Demir, Zafer Dogan, 18 Sep 2025, Asymptotic Study of In-context Learning with Random Transformers through Equivalent Models, https://arxiv.org/abs/2509.15152
  • Kishan Padayachy, Ronald Richman, Salvatore Scognamiglio, Mario V. W\"uthrich, 9 Sep 2025, In-Context Learning Enhanced Credibility Transformer, https://arxiv.org/abs/2509.08122
  • Bishnu Bhusal, Manoj Acharya, Ramneet Kaur, Colin Samplawski, Anirban Roy, Adam D. Cobb, Rohit Chadha, Susmit Jha, 17 Sep 2025, Privacy-Aware In-Context Learning for Large Language Models, https://arxiv.org/abs/2509.13625
  • Haolong Zheng, Yekaterina Yegorova, Mark Hasegawa-Johnson, 16 Sep 2025, TICL: Text-Embedding KNN For Speech In-Context Learning Unlocks Speech Recognition Abilities of Large Multimodal Models, https://arxiv.org/abs/2509.13395
  • Hadi Askari, Shivanshu Gupta, Terry Tong, Fei Wang, Anshuman Chhabra, Muhao Chen, 2 Oct 2025, Unraveling Indirect In-Context Learning Using Influence Functions, https://arxiv.org/abs/2501.01473
  • Yue M. Lu, Mary I. Letey, Jacob A. Zavatone-Veth, Anindita Maiti, Cengiz Pehlevan, 1 Oct 2025, Asymptotic theory of in-context learning by linear attention, https://arxiv.org/abs/2405.11751
  • Junsoo Oh, Wei Huang, Taiji Suzuki, 14 Oct 2025, Mamba Can Learn Low-Dimensional Targets In-Context via Test-Time Feature Learning, https://arxiv.org/abs/2510.12026
  • Yuta Kobayashi, Zilin Jing, Jiayu Yao, Hongseok Namkoong, Shalmali Joshi, 14 Oct 2025, Learning-To-Measure: In-context Active Feature Acquisition, https://arxiv.org/abs/2510.12624
  • Chenxu Wang, Hao Li, Yiqun Zhang, Linyao Chen, Jianhao Chen, Ping Jian, Peng Ye, Qiaosheng Zhang, Shuyue Hu, 14 Oct 2025, ICL-Router: In-Context Learned Model Representations for LLM Routing, https://arxiv.org/abs/2510.09719
  • Serena Gomez Wannaz, 30 Sep 2025, ICL Optimized Fragility, https://arxiv.org/abs/2510.00300
  • Wa\"iss Azizian, Ali Hasan, 1 Oct 2025, How Does the Pretraining Distribution Shape In-Context Learning? Task Selection, Generalization, and Robustness, https://arxiv.org/abs/2510.01163
  • Youngju Yoo, Jiaheng Hu, Yifeng Zhu, Bo Liu, Qiang Liu, Roberto Martín-Martín, Peter Stone, 24 Sep 2025, RoboSSM: Scalable In-context Imitation Learning via State-Space Models, https://arxiv.org/abs/2509.19658
  • Tianle Zhang, Wanlong Fang, Jonathan Woo, Paridhi Latawa, Deepak A. Subramanian, Alvin Chan, 24 Sep 2025, Can LLMs Reason Over Non-Text Modalities in a Training-Free Manner? A Case Study with In-Context Representation Learning, https://arxiv.org/abs/2509.17552
  • Zihan Chen, Song Wang, Xingbo Fu, Chengshuai Shi, Zhenyu Lei, Cong Shen, Jundong Li, 28 Oct 2025, From Cross-Task Examples to In-Task Prompts: A Graph-Based Pseudo-Labeling Framework for In-context Learning, https://arxiv.org/abs/2510.24528
  • Gabriel O. dos Santos, Esther Colombini, Sandra Avila, 28 Oct 2025, What do vision-language models see in the context? Investigating multimodal in-context learning, https://arxiv.org/abs/2510.24331
  • Wenhao Wu, Fuhong Liu, Haoru Li, Zican Hu, Daoyi Dong, Chunlin Chen, Zhi Wang, 28 Oct 2025, Mixture-of-Experts Meets In-Context Reinforcement Learning, https://arxiv.org/abs/2506.05426
  • Vahid Balazadeh, Hamidreza Kamkari, Valentin Thomas, Benson Li, Junwei Ma, Jesse C. Cresswell, Rahul G. Krishnan, 27 Oct 2025, CausalPFN: Amortized Causal Effect Estimation via In-Context Learning, https://arxiv.org/abs/2506.07918
  • Subhojyoti Mukherjee, Josiah P. Hanna, Qiaomin Xie, Robert Nowak, 22 Oct 2025, Pretraining Decision Transformers with Reward Prediction for In-Context Multi-task Structured Bandit Learning, https://arxiv.org/abs/2406.05064
  • Devvrit Khatri, Pranamya Kulkarni, Nilesh Gupta, Yerram Varun, Liqian Peng, Jay Yagnik, Praneeth Netrapalli, Cho-Jui Hsieh, Alec Go, Inderjit S Dhillon, Aditya Kusupati, Prateek Jain, 17 Oct 2025, Compressing Many-Shots in In-Context Learning, https://arxiv.org/abs/2510.16092
  • Huai-Chih Wang, Hsiang-Chun Chuang, Hsi-Chun Cheng, Dai-Jie Wu, Shao-Hua Sun, 18 Oct 2025, CooT: Learning to Coordinate In-Context with Coordination Transformers, https://arxiv.org/abs/2506.23549
  • Tim Genewein, Li Kevin Wenliang, Jordi Grau-Moya, Anian Ruoss, Laurent Orseau, Marcus Hutter, 17 Oct 2025, Understanding Prompt Tuning and In-Context Learning via Meta-Learning, https://arxiv.org/abs/2505.17010
  • Aryaman Arora, Dan Jurafsky, Christopher Potts, Noah D. Goodman, 22 Sep 2025, Bayesian scaling laws for in-context learning, https://arxiv.org/abs/2410.16531
  • Ronald Seoh, Dan Goldwasser, 19 Sep 2025, EmoGist: Efficient In-Context Learning for Visual Emotion Understanding, https://arxiv.org/abs/2505.14660
  • Bingqing Song, Jiaxiang Li, Rong Wang, Songtao Lu, Mingyi Hong, 26 Oct 2025, A Framework for Quantifying How Pre-Training and Context Benefit In-Context Learning, https://arxiv.org/abs/2510.22594
  • Andrei Baroian, 25 Oct 2025, Supervised Fine-Tuning or In-Context Learning? Evaluating LLMs for Clinical NER, https://arxiv.org/abs/2510.22285
  • Shenran Wang, Timothy Tin-Long Tse, Jian Zhu, 27 Oct 2025, Understanding In-Context Learning Beyond Transformers: An Investigation of State Space and Hybrid Architectures, https://arxiv.org/abs/2510.23006
  • Tianyi Ma, Tengyao Wang, Richard J. Samworth, 27 Oct 2025, Provable test-time adaptivity and distributional robustness of in-context learning, https://arxiv.org/abs/2510.23254
  • Taejong Joo, Diego Klabjan, 25 Oct 2025, Technical Debt in In-Context Learning: Diminishing Efficiency in Long Context, https://arxiv.org/abs/2502.04580
  • Patrick Kahardipraja, Reduan Achtibat, Thomas Wiegand, Wojciech Samek, Sebastian Lapuschkin, 27 Oct 2025, The Atlas of In-Context Learning: How Attention Heads Shape In-Context Retrieval Augmentation, https://arxiv.org/abs/2505.15807
  • Yousef Emami, Hao Zhou, SeyedSina Nabavirazani, Luis Almeida, 15 Oct 2025, LLM-Enabled In-Context Learning for Data Collection Scheduling in UAV-assisted Sensor Networks, https://arxiv.org/abs/2504.14556
  • Fan Wang, Zhiyuan Chen, Yuxuan Zhong, Sunjian Zheng, Pengtao Shao, Bo Yu, Shaoshan Liu, Jianan Wang, Ning Ding, Yang Cao and Yu Kang, 26 Sep 2025, Context and Diversity Matter: The Emergence of In-Context Learning in World Models, https://arxiv.org/abs/2509.22353
  • Aayush Mishra, Daniel Khashabi, Anqi Liu, 26 Sep 2025, IA2: Alignment with ICL Activations Improves Supervised Fine-Tuning, https://arxiv.org/abs/2509.22621
  • Jinmei Liu, Fuhong Liu, Jianye Hao, Bo Wang, Huaxiong Li, Chunlin Chen, Zhi Wang, 26 Sep 2025, Scalable In-Context Q-Learning, https://arxiv.org/abs/2506.01299
  • Zhaochun Ren, Zhou Yang, Chenglong Ye, Haizhou Sun, Chao Chen, Xiaofei Zhu, Xiangwen Liao, 8 Oct 2025, Fine-Grained Emotion Recognition via In-Context Learning, https://arxiv.org/abs/2510.06600
  • Taylor Sorensen and Yejin Choi, 8 Oct 2025, Opt-ICL at LeWiDi-2025: Maximizing In-Context Signal from Rater Examples via Meta-Learning, https://arxiv.org/abs/2510.07105
  • Andrea Wynn and Metod Jazbec and Charith Peris and Rinat Khaziev and Anqi Liu and Daniel Khashabi and Eric Nalisnick, 2 Oct 2025, Safe and Efficient In-Context Learning via Risk Control, https://arxiv.org/abs/2510.02480
  • Jiuqi Wang, Rohan Chandra, Shangtong Zhang, 3 Oct 2025, Towards Provable Emergence of In-Context Reinforcement Learning, https://arxiv.org/abs/2509.18389
  • Frank Cole, Yuxuan Zhao, Yulong Lu, Tianhao Zhang, 21 Oct 2025, In-Context Learning of Linear Dynamical Systems with Transformers: Approximation Bounds and Depth-Separation, https://arxiv.org/abs/2502.08136
  • Patrick Seifner, Kostadin Cvejoski, David Berghaus, César Ojeda, Ramsés J. Sánchez, 21 Oct 2025, In-Context Learning of Stochastic Differential Equations with Foundation Inference Models, https://arxiv.org/abs/2502.19049
  • Zhaiming Shen, Alexander Hsu, Rongjie Lai, Wenjing Liao, 21 Oct 2025, Understanding In-Context Learning on Structured Manifolds: Bridging Attention to Kernel Methods, https://arxiv.org/abs/2506.10959
  • Tongxi Wang, Zhuoyang Xia, 25 Sep 2025, Theoretical Bounds for Stable In-Context Learning, https://arxiv.org/abs/2509.20677
  • Hakaze Cho, Haolin Yang, Gouki Minegishi, Naoya Inoue, 25 Sep 2025, Mechanism of Task-oriented Information Removal in In-context Learning, https://arxiv.org/abs/2509.21012
  • Huaze Tang and Tianren Peng and Shao-lun Huang, 25 Sep 2025, On Theoretical Interpretations of Concept-Based In-Context Learning, https://arxiv.org/abs/2509.20882
  • Liuwang Kang, Fan Wang, Shaoshan Liu, Hung-Chyun Chou, Chuan Lin, and Ning Ding, 26 Sep 2025, In-Context Learning can Perform Continual Learning Like Humans, https://arxiv.org/abs/2509.22764
  • Wenhao Zhang, Shao Zhang, Xihuai Wang, Yang Li, Ying Wen, 27 Sep 2025, Towards Monotonic Improvement in In-Context Reinforcement Learning, https://arxiv.org/abs/2509.23209
  • Qingren Yao, Ming Jin, Chengqi Zhang, Chao-Han Huck Yang, Jun Qi, Shirui Pan, 28 Sep 2025, Estimating Time Series Foundation Model Transferability via In-Context Learning, https://arxiv.org/abs/2509.23695
  • Qiushui Xu, Yuhao Huang, Yushu Jiang, Lei Song, Jinyu Wang, Wenliang Zheng, Jiang Bian, 28 Sep 2025, In-Context Compositional Q-Learning for Offline Reinforcement Learning, https://arxiv.org/abs/2509.24067
  • David Berghaus, Patrick Seifner, Kostadin Cvejoski, César Ojeda, Ramsés J. Sánchez, 29 Sep 2025, In-Context Learning of Temporal Point Processes with Foundation Inference Models, https://arxiv.org/abs/2509.24762
  • Junchuan Zhao, Xintong Wang, Ye Wang, 21 May 2025, Prosody-Adaptable Audio Codecs for Zero-Shot Voice Conversion via In-Context Learning, https://arxiv.org/abs/2505.15402
  • Mohammed Sabry, Anya Belz, 26 Sep 2025, What Matters More For In-Context Learning under Matched Compute Budgets: Pretraining on Natural Text or Incorporating Targeted Synthetic Examples?, https://arxiv.org/abs/2509.22947
  • Hamidreza Rouzegar and Masoud Makrehchi, 27 Sep 2025, The Impact of Role Design in In-Context Learning for Large Language Models, https://arxiv.org/abs/2509.23501
  • Yuxin Jiang, Yuchao Gu, Yiren Song, Ivor Tsang, Mike Zheng Shou, 29 Sep 2025, Personalized Vision via Visual In-Context Learning, https://arxiv.org/abs/2509.25172
  • Andrey Polubarov, Nikita Lyubaykin, Alexander Derevyagin, Ilya Zisman, Denis Tarasov, Alexander Nikulin, Vladislav Kurenkov, 29 Sep 2025, Vintix: Action Model via In-Context Reinforcement Learning, https://arxiv.org/abs/2501.19400
  • Fan Wang, Pengtao Shao, Yiming Zhang, Bo Yu, Shaoshan Liu, Ning Ding, Yang Cao, Yu Kang, Haifeng Wang, 28 Sep 2025, Towards Large-Scale In-Context Reinforcement Learning by Meta-Training in Randomized Worlds, https://arxiv.org/abs/2502.02869
  • Paulius Sasnauskas, Yiğit Yalın, Goran Radanović, 26 Sep 2025, Can In-Context Reinforcement Learning Recover From Reward Poisoning Attacks?, https://arxiv.org/abs/2506.06891
  • Hakaze Cho, Peng Luo, Mariko Kato, Rin Kaenbyou, Naoya Inoue, 27 Sep 2025, Mechanistic Fine-tuning for In-context Learning, https://arxiv.org/abs/2505.14233
  • Honghao Fu, Yuan Ouyang, Kai-Wei Chang, Yiwei Wang, Zi Huang, Yujun Cai, 6 Oct 2025, ContextNav: Towards Agentic Multimodal In-Context Learning, https://arxiv.org/abs/2510.04560
  • Rabeya Amin Jhuma and Mostafa Mohaimen Akand Faisal, 4 Oct 2025, From Theory to Practice: Evaluating Data Poisoning Attacks and Defenses in In-Context Learning on Social Media Health Discourse, https://arxiv.org/abs/2510.03636
  • Weishuo Ma, Yanbo Wang, Xiyuan Wang, Lei Zou, Muhan Zhang, 6 Oct 2025, GILT: An LLM-Free, Tuning-Free Graph Foundational Model for In-Context Learning, https://arxiv.org/abs/2510.04567
  • Kaito Takanami, Takashi Takahashi, and Yoshiyuki Kabashima, 6 Oct 2025, Learning Linear Regression with Low-Rank Tasks in-Context, https://arxiv.org/abs/2510.04548
  • Alessio Russo, Ryan Welch, Aldo Pacchiano, 6 Oct 2025, In-Context Learning for Pure Exploration, https://arxiv.org/abs/2506.01876
  • Jelena Bratulić, Sudhanshu Mittal, David T. Hoffmann, Samuel Böhm, Robin Tibor Schirrmeister, Tonio Ball, Christian Rupprecht, Thomas Brox, 6 Oct 2025, Unlocking In-Context Learning for Natural Datasets Beyond Language Modelling, https://arxiv.org/abs/2501.06256
  • Jiachen Jiang and Yuxin Dong and Jinxin Zhou and Zhihui Zhu, 4 Oct 2025, From Compression to Expression: A Layerwise Analysis of In-Context Learning, https://arxiv.org/abs/2505.17322
  • Jiachen Jiang and Zhen Qin and Zhihui Zhu, 9 Oct 2025, In-Context Learning for Non-Stationary MIMO Equalization, https://arxiv.org/abs/2510.08711
  • Zhaochun Ren, Zhou Yang, Chenglong Ye, Yufeng Wang, Haizhou Sun, Chao Chen, Xiaofei Zhu, Yunbing Wu, Xiangwen Liao, 10 Oct 2025, E-ICL: Enhancing Fine-Grained Emotion Recognition through the Lens of Prototype Theory, https://arxiv.org/abs/2406.02642
  • Marta Contreiras Silva, Daniel Faria, Catia Pesquita, 24 Oct 2025, CMOMgen: Complex Multi-Ontology Alignment via Pattern-Guided In-Context Learning, https://arxiv.org/abs/2510.21656
  • Pan Chen, Shaohong Chen, Mark Wang, Shi Xuan Leong, Priscilla Fung, Varinia Bernales, Alan Aspuru-Guzik, 23 Oct 2025, Schema for In-Context Learning, https://arxiv.org/abs/2510.13905
  • Sarthak Mittal, Divyat Mahajan, Guillaume Lajoie, Mohammad Pezeshki, 13 Oct 2025, Iterative Amortized Inference: Unifying In-Context Learning and Learned Optimizers, https://arxiv.org/abs/2510.11471
  • Tomoya Wakayama, Taiji Suzuki, 13 Oct 2025, In-Context Learning Is Provably Bayesian Inference: A Generalization Theory for Meta-Learning, https://arxiv.org/abs/2510.10981
  • Haoyuan Sun, Ali Jadbabaie, Navid Azizan, 11 Oct 2025, On the Role of Transformer Feed-Forward Layers in Nonlinear In-Context Learning, https://arxiv.org/abs/2501.18187
  • Yali Du, Hui Sun and Ming Li, 12 Oct 2025, Post-Incorporating Code Structural Knowledge into Pretrained Models via ICL for Code Translation, https://arxiv.org/abs/2503.22776
  • Sofia Kirsanova, Yao-Yi Chiang, Weiwei Duan, 9 Oct 2025, Detecting Legend Items on Historical Maps Using GPT-4o with In-Context Learning, https://arxiv.org/abs/2510.08385
  • Ioana Marinescu, Kyunghyun Cho, Eric Karl Oermann, 9 Oct 2025, On the Relationship Between the Choice of Representation and In-Context Learning, https://arxiv.org/abs/2510.08372
  • Xinyan Hu, Kayo Yin, Michael I. Jordan, Jacob Steinhardt, Lijie Chen, 9 Oct 2025, Understanding In-context Learning of Addition via Activation Subspaces, https://arxiv.org/abs/2505.05145
  • Abhiti Mishra, Yash Patel, Ambuj Tewari, 8 Oct 2025, Continuum Transformers Perform In-Context Learning by Operator Gradient Descent, https://arxiv.org/abs/2505.17838
  • Mingen Li, Houjian Yu, Yixuan Huang, Youngjin Hong, Changhyun Choi, 22 Oct 2025, Hierarchical DLO Routing with Reinforcement Learning and In-Context Vision-language Models, https://arxiv.org/abs/2510.19268
  • Amir Moeini, Minjae Kwon, Alper Kamil Bozkurt, Yuichi Motai, Rohan Chandra, Lu Feng, Shangtong Zhang, 29 Sep 2025, Safe In-Context Reinforcement Learning, https://arxiv.org/abs/2509.25582
  • Kento Kuwataka, Taiji Suzuki, 30 Sep 2025, Test time training enhances in-context learning of nonlinear functions, https://arxiv.org/abs/2509.25741
  • Mary I. Letey, Jacob A. Zavatone-Veth, Yue M. Lu, Cengiz Pehlevan, 30 Sep 2025, Pretrain-Test Task Alignment Governs Generalization in In-Context Learning, https://arxiv.org/abs/2509.26551
  • Andrei I. Muresanu, Anvith Thudi, Michael R. Zhang, Nicolas Papernot, 29 Sep 2025, Fast Exact Unlearning for In-Context Learning Data for LLMs, https://arxiv.org/abs/2402.00751
  • Yousef Emami, Seyedsina Nabavirazavi, Jingjing Zheng, Hao Zhou, Miguel Gutierrez Gaitan, Kai Li, Luis Almeida, 7 Oct 2025, Joint Communication Scheduling and Velocity Control for Multi-UAV-Assisted Post-Disaster Monitoring: An Attention-Based In-Context Learning Approach, https://arxiv.org/abs/2510.05698
  • Haneul Yoo, Jiho Jin, Kyunghyun Cho, Alice Oh, 7 Oct 2025, Code-Switching In-Context Learning for Cross-Lingual Transfer of Large Language Models, https://arxiv.org/abs/2510.05678
  • Jingcheng Niu, Subhabrata Dutta, Ahmed Elshabrawy, Harish Tayyar Madabushi, Iryna Gurevych, 6 Oct 2025, Illusion or Algorithm? Investigating Memorization, Emergence, and Symbolic Processing in In-Context Learning, https://arxiv.org/abs/2505.11004
  • Ling Zhang, Xianliang Yang, Juwon Yu, Park Cheonyoung, Lei Song, Jiang Bian, 16 Oct 2025, Holdout-Loss-Based Data Selection for LLM Finetuning via In-Context Learning, https://arxiv.org/abs/2510.14459
  • Xinyao Liao, Xianfang Zeng, Ziye Song, Zhoujie Fu, Gang Yu, Guosheng Lin, 16 Oct 2025, In-Context Learning with Unpaired Clips for Instruction-based Video Editing, https://arxiv.org/abs/2510.14648

AI Books from Aussie AI



The Sweetest Lesson: Your Brain Versus AI: new book on AI intelligence theory:
  • Your brain is 50 times bigger than the best AI engines.
  • Truly intelligent AI will require more compute!
  • Another case of the bitter lesson?
  • Maybe it's the opposite of that: the sweetest lesson.

Get your copy from Amazon: The Sweetest Lesson



RAG Optimization: Accurate and Efficient LLM Applications: new book on RAG architectures:
  • Smarter RAG
  • Faster RAG
  • Cheaper RAG
  • Agentic RAG
  • RAG reasoning

Get your copy from Amazon: RAG Optimization



Generative AI Applications book:
  • Deciding on your AI project
  • Planning for success and safety
  • Designs and LLM architectures
  • Expediting development
  • Implementation and deployment

Get your copy from Amazon: Generative AI Applications



Generative AI in C++: generative AI programming book:
  • Generative AI coding in C++
  • Transformer engine speedups
  • LLM models
  • Phone and desktop AI
  • Code examples
  • Research citations

Get your copy from Amazon: Generative AI in C++



CUDA C++ Optimization book:
  • Faster CUDA C++ kernels
  • Optimization tools & techniques
  • Compute optimization
  • Memory optimization

Get your copy from Amazon: CUDA C++ Optimization



CUDA C++ Debugging book:
  • Debugging CUDA C++ kernels
  • Tools & techniques
  • Self-testing & reliability
  • Common GPU kernel bugs

Get your copy from Amazon: CUDA C++ Debugging
