A Design Methodology for Fault-Tolerant Neuromorphic Computing Using Bayesian Neural Network
Published: 2023-09-27
Issue: 10
Volume: 14
Page: 1840
ISSN: 2072-666X
Container-title: Micromachines
Language: en
Short-container-title: Micromachines
Author:
Gao Di 1, Xie Xiaoru 2, Wei Dongxu 3
Affiliation:
1. The School of Intelligent Manufacturing, Hangzhou Polytechnic, Hangzhou 311402, China
2. The School of Electronic Science and Engineering, Nanjing University, Nanjing 210023, China
3. The College of Information Science and Electronic Engineering, Zhejiang University, Hangzhou 310027, China
Abstract
Memristor crossbar arrays are a promising platform for neuromorphic computing. In practical scenarios, the synapse weights represented by the memristors are subject to process variations, so that a programmed weight, when read out for inference, is no longer deterministic but follows a stochastic distribution. It is therefore highly desirable to learn weight distributions that account for process variations, so that the inference performance of the memristor crossbar array matches the design value. In this paper, we introduce a design methodology for fault-tolerant neuromorphic computing using a Bayesian neural network, which combines variational Bayesian inference with a fault-aware variational posterior distribution. The proposed framework incorporates the impact of memristor deviations into algorithmic training, where the weight distributions of the neural network are optimized to accommodate these uncertainties and minimize inference degradation. The experimental results confirm that the proposed methodology tolerates both process variations and noise while achieving more robust computing in memristor crossbar arrays.
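As a rough illustration of the kind of training the abstract describes, the sketch below shows a Bayesian linear layer learned by variational inference whose posterior spread is widened by an assumed memristor-deviation term, so that training already sees the weight perturbations expected at read-out. This is not the authors' implementation: the names (FaultAwareBayesLinear, sigma_dev) and the multiplicative Gaussian deviation model are illustrative assumptions.

```python
# Minimal sketch (not the paper's code): variational Bayesian training with a
# fault-aware posterior. The device-variation model (multiplicative Gaussian
# deviation controlled by sigma_dev) is an assumption for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FaultAwareBayesLinear(nn.Module):
    def __init__(self, in_features, out_features, sigma_dev=0.1, prior_sigma=1.0):
        super().__init__()
        # Variational parameters: mean and softplus-parameterized std of the weights.
        self.w_mu = nn.Parameter(torch.randn(out_features, in_features) * 0.05)
        self.w_rho = nn.Parameter(torch.full((out_features, in_features), -4.0))
        self.bias = nn.Parameter(torch.zeros(out_features))
        self.sigma_dev = sigma_dev      # assumed relative memristor deviation
        self.prior_sigma = prior_sigma  # std of the Gaussian weight prior

    def forward(self, x):
        sigma_q = F.softplus(self.w_rho)
        # Fault-aware posterior: the learned variational std is combined with the
        # assumed device-variation spread before sampling (reparameterization trick).
        sigma_total = torch.sqrt(sigma_q ** 2 + (self.sigma_dev * self.w_mu) ** 2)
        w = self.w_mu + sigma_total * torch.randn_like(self.w_mu)
        return F.linear(x, w, self.bias)

    def kl(self):
        # KL( N(w_mu, sigma_q^2) || N(0, prior_sigma^2) ), summed over all weights.
        sigma_q = F.softplus(self.w_rho)
        return (torch.log(self.prior_sigma / sigma_q)
                + (sigma_q ** 2 + self.w_mu ** 2) / (2 * self.prior_sigma ** 2)
                - 0.5).sum()

def elbo_loss(model, x, y, kl_weight=1e-3):
    # Negative ELBO: data term (cross-entropy) plus scaled complexity term (KL).
    logits = model(x)
    nll = F.cross_entropy(logits, y)
    kl = sum(m.kl() for m in model.modules() if isinstance(m, FaultAwareBayesLinear))
    return nll + kl_weight * kl
```

A full model would stack such layers in place of ordinary linear layers, minimize the negative ELBO during training, and then evaluate accuracy under repeated sampled deviations to estimate robustness on the crossbar.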
Subject
Electrical and Electronic Engineering, Mechanical Engineering, Control and Systems Engineering
Cited by
1 article.
1. Analysis of Memristor Neural Networks for Fault Tolerant Computing. 2024 International Conference on Knowledge Engineering and Communication Systems (ICKECS), 2024-04-18.