1. N. Shazeer et al., "Outrageously large neural networks: The sparsely-gated mixture-of-experts layer," arXiv preprint arXiv:1701.06538, 2017.
2. Z.-H. Zhou, Ensemble methods: foundations and algorithms. CRC press, 2012.
3. W. Li, Y. Peng, M. Zhang, L. Ding, H. Hu, and L. Shen, "Deep model fusion: A survey," arXiv preprint arXiv:2309.15698, 2023.
4. M. Shoeybi, M. Patwary, R. Puri, P. LeGresley, J. Casper, and B. Catanzaro, "Megatron-lm: Training multi-billion parameter language models using model parallelism," arXiv preprint arXiv:1909.08053, 2019.
5. A. Vaswani, "Attention is all you need," Advances in Neural Information Processing Systems, 2017.