Authors: Philipp Kreowsky, Justin Knapheide, Benno Stabernack
Publisher: Springer Nature Switzerland
References: 23 articles.
1. Ben-Nun, T., Hoefler, T.: Demystifying parallel and distributed deep learning: an in-depth concurrency analysis. ACM Comput. Surv. (CSUR) 52(4), 1–43 (2019)
2. Narayanan, D., et al.: PipeDream: generalized pipeline parallelism for DNN training. In: Proceedings of the 27th ACM Symposium on Operating Systems Principles, SOSP '19, pp. 1–15. Association for Computing Machinery, New York (2019)
3. Unnikrishnan, N.K., Parhi, K.K.: LayerPipe: accelerating deep neural network training by intra-layer and inter-layer gradient pipelining and multiprocessor scheduling. In: 2021 IEEE/ACM International Conference on Computer Aided Design (ICCAD). IEEE (2021)
4. Huang, Y., et al.: GPipe: efficient training of giant neural networks using pipeline parallelism. In: Proceedings of the 33rd International Conference on Neural Information Processing Systems (2019). https://proceedings.neurips.cc/paper_files/paper/2019/file/093f65e080a295f8076b1c5722a46aa2-Paper.pdf
5. Wang, T., Geng, T., Li, A., Jin, X., Herbordt, M.: FPDeep: scalable acceleration of CNN training on deeply-pipelined FPGA clusters. IEEE Trans. Comput. 69, 1143–1158 (2020)