1. P. Liu, W. Yuan, J. Fu, Z. Jiang, H. Hayashi, G. Neubig, Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing, ACM Comput. Surv. 55 (9) (2023).
2. C. Zhou, P. Liu, P. Xu, S. Iyer, J. Sun, Y. Mao, X. Ma, A. Efrat, P. Yu, L. Yu, et al., Lima: Less is more for alignment, arXiv preprint arXiv:2305.11206 (2023).
3. Y. Arslan, K. Allix, L. Veiber, C. Lothritz, T.F. Bissyandé, J. Klein, A. Goujon, A comparison of pre-trained language models for multi-class text classification in the financial domain, in: Companion Proceedings of the Web Conference 2021, 2021, pp. 260–268.
4. H.W. Chung, L. Hou, S. Longpre, B. Zoph, Y. Tay, W. Fedus, E. Li, X. Wang, M. Dehghani, S. Brahma, et al., Scaling instruction-finetuned language models, arXiv preprint arXiv:2210.11416 (2022).
5. B. Krarup, S. Krivic, D. Magazzeni, D. Long, M. Cashmore, D.E. Smith, Contrastive explanations of plans through model restrictions, J. Artif. Intell. Res. 72 (2021).