Continual Learning of a Transformer-Based Deep Learning Classifier Using an Initial Model from Action Observation EEG Data to Online Motor Imagery Classification

Authors:

Lee Po-Lei 1,2, Chen Sheng-Hao 1, Chang Tzu-Chien 1, Lee Wei-Kung 3, Hsu Hao-Teng 1,2, Chang Hsiao-Huang 4,5

Affiliations:

1. Department of Electrical Engineering, National Central University, Taoyuan 320, Taiwan

2. Pervasive Artificial Intelligence Research Labs, Hsinchu 300, Taiwan

3. Department of Rehabilitation, Taoyuan General Hospital, Taoyuan 330, Taiwan

4. Division of Cardiovascular Surgery, Department of Surgery, Taipei Veterans General Hospital, Taipei 112, Taiwan

5. Department of Surgery, School of Medicine, College of Medicine, Taipei Medical University, Taipei 110, Taiwan

Abstract

The motor imagery (MI)-based brain-computer interface (BCI) is an intuitive interface that enables users to communicate with external environments through their minds. However, current MI-BCI systems ask naïve subjects to perform unfamiliar MI tasks with only simple textual instructions or visual/auditory cues. Such unclear instructions for MI execution not only result in large inter-subject variability in the measured EEG patterns but also make it difficult to pool cross-subject data for big-data training. In this study, we designed a BCI training method in a virtual reality (VR) environment. Subjects wore a head-mounted display (HMD) and executed action observation (AO) concurrently with MI (i.e., AO + MI) in the VR environment. EEG signals recorded during the AO + MI task were used to train an initial model, which was then continually improved with EEG data acquired in subsequent BCI training sessions. We recruited five healthy subjects, and each subject participated in three kinds of tasks: an AO + MI task, an MI task, and an MI with visual feedback (MI-FB) task performed three times. This study adopted a transformer-based spatial-temporal network (TSTN) to decode the user's MI intentions. In contrast to convolutional neural network (CNN) or recurrent neural network (RNN) approaches, the TSTN extracts spatial and temporal features and applies attention mechanisms along both the spatial and temporal dimensions to capture global dependencies. The mean detection accuracies of the TSTN were 0.63, 0.68, 0.75, and 0.77 in the MI, first MI-FB, second MI-FB, and third MI-FB sessions, respectively. This study demonstrates that AO + MI gives subjects an easier way to conform their imagined actions, and that BCI performance improved with the continual learning of the MI-FB training process.
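The abstract names two mechanisms that a short sketch can make concrete: a transformer that attends separately along the spatial (channel) and temporal (sample) axes of an EEG epoch, and a continual-learning step that fine-tunes the initial AO + MI model on data from later MI-FB sessions. The PyTorch code below is a minimal sketch under assumed settings; the layer sizes, pooling strategy, class count, and the `continual_update` routine are illustrative assumptions, not the authors' published TSTN implementation.

```python
import torch
import torch.nn as nn

class SpatialTemporalAttention(nn.Module):
    """One TSTN-style block: self-attention across channels (spatial),
    then across time samples (temporal). All dimensions are illustrative."""
    def __init__(self, n_channels=32, n_times=256, d_model=64, n_heads=4):
        super().__init__()
        # Embed each channel's time course, and each time step's channel vector
        self.spatial_proj = nn.Linear(n_times, d_model)      # (B, C, T) -> (B, C, d)
        self.temporal_proj = nn.Linear(n_channels, d_model)  # (B, T, C) -> (B, T, d)
        self.spatial_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.temporal_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm_s = nn.LayerNorm(d_model)
        self.norm_t = nn.LayerNorm(d_model)

    def forward(self, x):                          # x: (batch, channels, times)
        s = self.spatial_proj(x)                   # tokens = EEG channels
        s, _ = self.spatial_attn(s, s, s)
        s = self.norm_s(s)                         # (B, C, d)
        t = self.temporal_proj(x.transpose(1, 2))  # tokens = time samples
        t, _ = self.temporal_attn(t, t, t)
        t = self.norm_t(t)                         # (B, T, d)
        # Pool each token stream and concatenate the two feature vectors
        return torch.cat([s.mean(dim=1), t.mean(dim=1)], dim=-1)  # (B, 2d)

class TSTNClassifier(nn.Module):
    """Backbone plus a linear head; n_classes=2 is an assumption."""
    def __init__(self, n_channels=32, n_times=256, n_classes=2):
        super().__init__()
        self.backbone = SpatialTemporalAttention(n_channels, n_times)
        self.head = nn.Linear(128, n_classes)     # 128 = 2 * d_model
    def forward(self, x):
        return self.head(self.backbone(x))

def continual_update(model, new_loader, epochs=5, lr=1e-4):
    """Hypothetical continual-learning step: warm-start from the current
    model and fine-tune on EEG epochs from the latest MI-FB session."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for eeg, labels in new_loader:            # eeg: (B, C, T)
            opt.zero_grad()
            loss_fn(model(eeg), labels).backward()
            opt.step()
    return model
```

In this reading, each MI-FB session's labeled epochs would be wrapped in a standard DataLoader and passed to `continual_update`; warm-starting from the previous session's weights, rather than retraining from scratch, is what makes the procedure continual.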

Funder

National Science and Technology Council

National Central University

Publisher

MDPI AG

Subject

Bioengineering
