1. Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., and Sutskever, I. (2019). Language Models Are Unsupervised Multitask Learners. OpenAI Technical Report. Available online: https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf (accessed on 15 May 2023).
2. Brown, T.B., Mann, B., Ryder, N., Subbiah, M., et al. (2020). Language Models Are Few-Shot Learners. Advances in Neural Information Processing Systems.
3. Radford, A., Narasimhan, K., Salimans, T., and Sutskever, I. (2018). Improving Language Understanding by Generative Pre-Training. OpenAI Technical Report. Available online: https://s3-us-west-2.amazonaws.com/openai-assets/research-covers/language-unsupervised/language_understanding_paper.pdf (accessed on 15 May 2023).
4. Lee, M. (2023). A Mathematical Investigation of Hallucination and Creativity in GPT Models. Mathematics, 11.
5. Saharia, C., Chan, W., Saxena, S., et al. (2022). Photorealistic Text-to-Image Diffusion Models with Deep Language Understanding. Advances in Neural Information Processing Systems.