A Review on Large-Scale Data Processing with Parallel and Distributed Randomized Extreme Learning Machine Neural Networks

Authors:

Elkin Gelvez-Almeida 1,2, Marco Mora 3, Ricardo J. Barrientos 3, Ruber Hernández-García 3, Karina Vilches-Ponce 3, Miguel Vera 2

Affiliation:

1. Departamento de Matemática, Física y Estadística, Facultad de Ciencias Básicas, Universidad Católica del Maule, Talca 3480112, Chile

2. Centro de Crecimiento Empresarial—MACONDOLAB, Facultad de Ciencias Básicas y Biomédicas, Universidad Simón Bolívar, San José de Cúcuta 540006, Colombia

3. Laboratory of Technological Research in Pattern Recognition (LITRP), Depto. DCI, Facultad de Ciencias de la Ingeniería, Universidad Católica del Maule, Talca 3480112, Chile

Abstract

Randomization-based feedforward neural networks have attracted great interest in the scientific community due to their simplicity, training speed, and accuracy comparable to traditional learning algorithms. The basic algorithm consists of randomly determining the weights and biases of the hidden layer and analytically calculating the weights of the output layer by solving an overdetermined linear system using the Moore–Penrose generalized inverse. When processing large volumes of data, randomization-based feedforward neural network models consume large amounts of memory and their training time increases drastically. To address these problems efficiently, parallel and distributed models have recently been proposed. Previous reviews of randomization-based feedforward neural network models have mainly focused on categorizing and describing the evolution of the algorithms presented in the literature. The main contribution of this paper is to approach the topic from the perspective of handling large volumes of data. In this sense, we present a current and extensive review of the parallel and distributed models of randomized feedforward neural networks, focusing on the extreme learning machine (ELM). In particular, we review the mathematical foundations (the Moore–Penrose generalized inverse and the solution of linear systems using parallel and distributed methods) and the hardware and software technologies considered in current implementations.
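The basic algorithm described above (random hidden-layer parameters, output weights obtained analytically via the Moore–Penrose generalized inverse) can be sketched in a few lines of NumPy. This is a minimal illustration of the single-hidden-layer ELM scheme, not any particular implementation from the reviewed literature; the function names, the sigmoid activation, and the seed handling are choices made here for the example.

```python
import numpy as np

def elm_train(X, T, n_hidden, seed=0):
    """Train a basic ELM on inputs X (n_samples x n_features)
    and targets T (n_samples x n_outputs)."""
    rng = np.random.default_rng(seed)
    n_features = X.shape[1]
    # Step 1: randomly determine hidden-layer weights and biases
    # (these are fixed and never trained).
    W = rng.standard_normal((n_features, n_hidden))
    b = rng.standard_normal(n_hidden)
    # Step 2: compute the hidden-layer output matrix H
    # (sigmoid activation).
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    # Step 3: solve the overdetermined linear system H @ beta = T
    # analytically with the Moore-Penrose generalized inverse.
    beta = np.linalg.pinv(H) @ T
    return W, b, beta

def elm_predict(X, W, b, beta):
    """Forward pass of the trained ELM."""
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta
```

The memory pressure discussed in the abstract is visible here: for n samples and L hidden neurons, `H` is an n × L dense matrix, and `np.linalg.pinv` computes its SVD, which is exactly the step that parallel and distributed ELM variants decompose across processing units.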

Funder

National Agency for Research and Development

Government of Chile

Publisher

MDPI AG

