Aussie AI
Prompt Tuning
Last Updated 29 August, 2025
by David Spuler, Ph.D.
Prompt tuning is an LLM optimization that prepends special learnable "prompt tokens" (soft prompts) to the input sequence while the base model's weights stay frozen. Its goals are similar to fine-tuning, but it is far more efficient because it avoids adjusting all the model parameters. In that sense it is analogous to Parameter-Efficient Fine-Tuning (PEFT) methods such as LoRA, although the mechanism is completely different: only the small set of prompt-token embeddings is trained.
Note that the term "prompt tuning" is also sometimes used in its literal sense, meaning the tuning of prompt text. In that usage it refers to automatic prompt optimization, or "programmatic prompting," rather than the soft-prompt technique described here.
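As a rough illustration of the mechanism, the sketch below prepends a small matrix of learnable "virtual token" embeddings to the input embeddings of a frozen causal LM, which is the basic idea behind soft prompt tuning (e.g., Lester et al., 2021, listed below). This is a minimal sketch, not any particular paper's implementation: the class name, prompt length, and choice of GPT-2 as the base model are illustrative assumptions.

```python
# Minimal soft prompt tuning sketch (illustrative assumptions, not a
# reference implementation of any specific paper).
import torch
import torch.nn as nn
from transformers import AutoModelForCausalLM

class SoftPromptModel(nn.Module):  # hypothetical class name
    def __init__(self, model_name="gpt2", num_prompt_tokens=20):
        super().__init__()
        self.model = AutoModelForCausalLM.from_pretrained(model_name)
        # Freeze all base model parameters; only the soft prompt is trained.
        for p in self.model.parameters():
            p.requires_grad = False
        embed_dim = self.model.get_input_embeddings().embedding_dim
        # Learnable "virtual token" embeddings prepended to every input.
        self.soft_prompt = nn.Parameter(
            torch.randn(num_prompt_tokens, embed_dim) * 0.02)

    def forward(self, input_ids, attention_mask=None):
        token_embeds = self.model.get_input_embeddings()(input_ids)
        batch_size = input_ids.size(0)
        # Expand the shared soft prompt across the batch and prepend it.
        prompt = self.soft_prompt.unsqueeze(0).expand(batch_size, -1, -1)
        inputs_embeds = torch.cat([prompt, token_embeds], dim=1)
        if attention_mask is not None:
            prompt_mask = torch.ones(batch_size, self.soft_prompt.size(0),
                                     dtype=attention_mask.dtype,
                                     device=attention_mask.device)
            attention_mask = torch.cat([prompt_mask, attention_mask], dim=1)
        return self.model(inputs_embeds=inputs_embeds,
                          attention_mask=attention_mask)
```

During training, only `soft_prompt` receives gradients: with the assumed 20 virtual tokens and GPT-2's 768-dimensional embeddings, that is roughly 15K trainable parameters, versus about 124M parameters for full fine-tuning of GPT-2 small.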
Research on Prompt Tuning
- IBM, 2024, What is prompt-tuning?, https://research.ibm.com/blog/what-is-ai-prompt-tuning
- Abhinav Jain, Swarat Chaudhuri, Thomas Reps, Chris Jermaine, 24 May 2024, Prompt Tuning Strikes Back: Customizing Foundation Models with Low-Rank Prompt Adaptation, https://arxiv.org/abs/2405.15282
- MohammadAli SadraeiJavaeri, Ehsaneddin Asgari, Alice Carolyn McHardy, Hamid Reza Rabiee, 7 Jun 2024, SuperPos-Prompt: Enhancing Soft Prompt Tuning of Language Models with Superposition of Multi Token Embeddings, https://arxiv.org/abs/2406.05279
- Martin Wistuba, Prabhu Teja Sivaprasad, Lukas Balles, Giovanni Zappella, 5 Jun 2024, Choice of PEFT Technique in Continual Learning: Prompt Tuning is Not All You Need, https://arxiv.org/abs/2406.03216
- Xuyang Wu, Zhiyuan Peng, Sravanthi Rajanala, Hsin-Tai Wu, Yi Fang, 31 May 2024, Passage-specific Prompt Tuning for Passage Reranking in Question Answering with Large Language Models, https://arxiv.org/abs/2405.20654
- Wei Zhu, Aaron Xuxiang Tian, Congrui Yin, Yuan Ni, Xiaoling Wang, Guotong Xie, 7 Jun 2024 (v2), IAPT: Instruction-Aware Prompt Tuning for Large Language Models, https://arxiv.org/abs/2405.18203
- Mengwei Xu, Wangsong Yin, Dongqi Cai, Rongjie Yi, Daliang Xu, Qipeng Wang, Bingyang Wu, Yihao Zhao, Chen Yang, Shihe Wang, Qiyang Zhang, Zhenyan Lu, Li Zhang, Shangguang Wang, Yuanchun Li, Yunxin Liu, Xin Jin, Xuanzhe Liu, 16 Jan 2024, A Survey of Resource-efficient LLM and Multimodal Foundation Models, https://arxiv.org/abs/2401.08092 Project: https://github.com/UbiquitousLearning/Efficient_Foundation_Model_Survey
- Tianyu Ding, Tianyi Chen, Haidong Zhu, Jiachen Jiang, Yiqi Zhong, Jinxin Zhou, Guangzhi Wang, Zhihui Zhu, Ilya Zharkov, Luming Liang, 18 Apr 2024 (v2), The Efficiency Spectrum of Large Language Models: An Algorithmic Survey, https://arxiv.org/abs/2312.00678
- M. Xu, D. Cai, W. Yin, S. Wang, X. Jin, X. Liu, 2024, Resource-efficient Algorithms and Systems of Foundation Models: A Survey, ACM Computing Surveys, https://dl.acm.org/doi/pdf/10.1145/3706418
- Brian Lester, Rami Al-Rfou, and Noah Constant. 2021. The Power of Scale for Parameter-Efficient Prompt Tuning. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 3045–3059, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. https://aclanthology.org/2021.emnlp-main.243/
- Shreyansh Shah, Oct 18, 2023, Prompt Tuning: A Powerful Technique for Adapting LLMs to New Tasks, https://medium.com/@shahshreyansh20/prompt-tuning-a-powerful-technique-for-adapting-llms-to-new-tasks-6d6fd9b83557
- Data Camp, May 19, 2024, Understanding Prompt Tuning: Enhance Your Language Models with Precision, https://www.datacamp.com/tutorial/understanding-prompt-tuning
- Sergey Sedov, Sumanth Bharadwaj Hachalli Karanam, Venu Gopal Kadamba, 24 Dec 2024, Exploring Embedding Priors in Prompt-Tuning for Improved Interpretability and Control, https://arxiv.org/abs/2412.18582
- Xiao Liu, Kaixuan Ji, Yicheng Fu, Weng Lam Tam, Zhengxiao Du, Zhilin Yang, Jie Tang, 20 Mar 2022 (v3), P-Tuning v2: Prompt Tuning Can Be Comparable to Fine-tuning Universally Across Scales and Tasks, Proceedings of the 60th Annual Meeting of the Association of Computational Linguistics, 2022, https://arxiv.org/abs/2110.07602 https://github.com/THUDM/P-tuning-v2 (Extends prompt tuning with extra soft prompt tokens at every layer, not just at the start of the input.)
- Haowei Zhu, Fangyuan Zhang, Rui Qin, Tianxiang Pan, Junhai Yong, Bin Wang, 24 Dec 2024 (v2), Semantic Hierarchical Prompt Tuning for Parameter-Efficient Fine-Tuning, https://arxiv.org/abs/2412.16956
- Xiang Lisa Li, Percy Liang, 1 Jan 2021, Prefix-Tuning: Optimizing Continuous Prompts for Generation, https://arxiv.org/abs/2101.00190 (Precursor to prompt tuning.)
- Andrea Matarazzo, Riccardo Torlone, 3 Jan 2025, A Survey on Large Language Models with some Insights on their Capabilities and Limitations, https://arxiv.org/abs/2501.04040 (Broad survey with many LLM topics covered from history to architectures to optimizations.)
- Qi Sun, Edoardo Cetin, Yujin Tang, 14 Jan 2025 (v2), Transformer2: Self-adaptive LLMs, https://arxiv.org/abs/2501.06252 (Uses vectors to fine-tune the model dynamically.)
- Liu Yang, Ziqian Lin, Kangwook Lee, Dimitris Papailiopoulos, Robert Nowak, 16 Jan 2025, Task Vectors in In-Context Learning: Emergence, Formation, and Benefit, https://arxiv.org/abs/2501.09240
- Dan Zhang, Tao Feng, Lilong Xue, Yuandong Wang, Yuxiao Dong, Jie Tang, 23 Jan 2025, Parameter-Efficient Fine-Tuning for Foundation Models, https://arxiv.org/abs/2501.13787
- X Li, C Jiang, Nov 2024, Optimizing Prompt Engineering Methods for Enhanced Logical Reasoning in Transformer Models, RMEL ’24, November 4–7, 2024, Hangzhou, China, https://www.researchgate.net/profile/Xiaoyan-Li-42/publication/389182048_Optimizing_Prompt_Engineering_Methods_for_Enhanced_Logical_Reasoning_in_Transformer_Models/links/67b82fa9461fb56424e3fc72/Optimizing-Prompt-Engineering-Methods-for-Enhanced-Logical-Reasoning-in-Transformer-Models.pdf https://github.com/xiaoyanLi629/RMELS2024
- Anushka Tiwari, Sayantan Pal, Rohini K. Srihari, Kaiyi Ji, 19 Jul 2025, Task-Agnostic Continual Prompt Tuning with Gradient-Based Selection and Decoding, https://arxiv.org/abs/2507.14725
- Lingyun Huang, Jianxu Mao, Junfei Yi, Ziming Tao, Yaonan Wang, 19 Jul 2025, CVPT: Cross Visual Prompt Tuning, https://arxiv.org/abs/2408.14961
- Ruijun Feng, Hammond Pearce, Pietro Liguori, Yulei Sui, 21 Jul 2025, CGP-Tuning: Structure-Aware Soft Prompt Tuning for Code Vulnerability Detection, https://arxiv.org/abs/2501.04510
- Jiong Yin, Liang Li, Jiehua Zhang, Yuhan Gao, Chenggang Yan, Xichun Sheng, 29 Jul 2025, Progressive Homeostatic and Plastic Prompt Tuning for Audio-Visual Multi-Task Incremental Learning, https://arxiv.org/abs/2507.21588
- Fei Zhang, Tianfei Zhou, Jiangchao Yao, Ya Zhang, Ivor W. Tsang, Yanfeng Wang, 1 Aug 2025, Decouple before Align: Visual Disentanglement Enhances Prompt Tuning, https://arxiv.org/abs/2508.00395
- Haitong Luo, Suhang Wang, Weiyao Zhang, Ruiqi Meng, Xuying Meng, Yujun Zhang, 15 Aug 2025, Generalize across Homophily and Heterophily: Hybrid Spectral Graph Pre-Training and Prompt Tuning, https://arxiv.org/abs/2508.11328
- Zian Zhai, Sima Qing, Xiaoyang Wang, Wenjie Zhang, 17 Aug 2025, SGPT: Few-Shot Prompt Tuning for Signed Graphs, https://arxiv.org/abs/2412.12155
- Pi-Wei Chen, Jerry Chun-Wei Lin, Wei-Han Chen, Jia Ji, Zih-Ching Chen, Feng-Hao Yeh, Chao-Chun Chen, 22 Aug 2025, Beyond Human-prompting: Adaptive Prompt Tuning with Semantic Alignment for Anomaly Detection, https://arxiv.org/abs/2508.16157
- Finn Rietz, Oleg Smirnov, Sara Karimi, Lele Cao, 18 Jul 2025, Prompt-Tuning Bandits: Enabling Few-Shot Generalization for Efficient Multi-Task Offline RL, https://arxiv.org/abs/2502.06358
- Ivan Zhang, 10 Aug 2025, A Real-Time, Self-Tuning Moderator Framework for Adversarial Prompt Detection, https://arxiv.org/abs/2508.07139
- Ali Shakeri, Wei Emma Zhang, Amin Beheshti, Weitong Chen, Jian Yang and Lishan Yang, 22 Jul 2025, FedDPG: An Adaptive Yet Efficient Prompt-tuning Approach in Federated Learning Settings, https://arxiv.org/abs/2507.19534
- Xinxu Wei, Kanhao Zhao, Yong Jiao, Lifang He and Yu Zhang, 3 Aug 2025, A Brain Graph Foundation Model: Pre-Training and Prompt-Tuning for Any Atlas and Disorder, https://arxiv.org/abs/2506.02044
- Han Gao, Timo Hartmann, Botao Zhong, Kai Lia, Hanbin Luo, 5 Aug 2025, Domain-Specific Fine-Tuning and Prompt-Based Learning: A Comparative Study for developing Natural Language-Based BIM Information Retrieval Systems, https://arxiv.org/abs/2508.05676
AI Books from Aussie AI
- The Sweetest Lesson: Your Brain Versus AI (new book on AI intelligence theory). Get your copy from Amazon: The Sweetest Lesson
- RAG Optimization: Accurate and Efficient LLM Applications (new book on RAG architectures). Get your copy from Amazon: RAG Optimization
- Generative AI Applications book. Get your copy from Amazon: Generative AI Applications
- Generative AI programming book. Get your copy from Amazon: Generative AI in C++
- CUDA C++ Optimization book. Get your copy from Amazon: CUDA C++ Optimization
- CUDA C++ Debugging book. Get your copy from Amazon: CUDA C++ Debugging