DA-BAG: A Multi-Model Fusion Text Classification Method Combining BERT and GCN Using Self-Domain Adversarial Training

Author:

Shao Dangguo1, Su Shun1, Ma Lei1, Yi Sanli1, Lai Hua1

Affiliation:

1. Kunming University of Science and Technology

Abstract

Both pre-training-based methods and GNN-based methods are considered state-of-the-art techniques in natural language processing, particularly for text classification. However, traditional graph learning methods focus solely on the structured information produced when text is converted to a graph, overlooking the local information hidden in the text's syntactic structure. Conversely, large-scale pre-trained models tend to overlook global semantic information, and naively combining the two approaches can introduce new noise and training biases. To tackle these challenges, we introduce DA-BAG, a novel approach that co-trains BERT and a graph convolution model. Using a self-domain adversarial training method on a single dataset, DA-BAG extracts multi-domain distribution features across multiple models, enabling self-adversarial domain adaptation without the need for additional data and thereby enhancing generalization and robustness. Furthermore, by incorporating an attention mechanism across the models, DA-BAG effectively combines the structural semantics of the graph with the token-level semantics of the pre-trained model, leveraging the information hidden in the text's syntactic structure. Additionally, a sequential multi-layer graph convolutional network (GCN) connection structure based on a residual pre-activation variant is employed to stabilize the feature distribution of the graph data and adjust the graph data structure accordingly. Extensive evaluations on five datasets (20NG, R8, R52, Ohsumed, MR) demonstrate that DA-BAG achieves state-of-the-art performance across a diverse range of datasets.
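The residual pre-activation GCN connection mentioned in the abstract can be illustrated with a minimal sketch. The abstract does not give the exact formula, so the layer below is an assumption modeled on the standard pre-activation pattern (normalize and activate the node features *before* the graph convolution, then add a residual connection); all function and variable names (`preact_residual_gcn_layer`, `a_hat`, `h`, `w`) are hypothetical, and plain Python lists stand in for tensors.

```python
# Hypothetical sketch of one residual pre-activation GCN layer:
#   h_out = h + A_hat @ relu(norm(h)) @ W
# The pre-activation ordering and residual sum are assumptions; the paper's
# abstract only names the pattern, not the equation.

def matmul(a, b):
    """Dense matrix product on nested lists."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
            for row in a]

def relu(m):
    return [[max(0.0, v) for v in row] for row in m]

def layer_norm(m, eps=1e-5):
    """Row-wise normalization to stabilize the feature distribution."""
    out = []
    for row in m:
        mean = sum(row) / len(row)
        var = sum((v - mean) ** 2 for v in row) / len(row)
        out.append([(v - mean) / (var + eps) ** 0.5 for v in row])
    return out

def preact_residual_gcn_layer(a_hat, h, w):
    """Pre-activation variant: norm -> relu -> weight -> graph aggregation,
    then a residual connection back to the input features."""
    update = matmul(a_hat, matmul(relu(layer_norm(h)), w))
    return [[hv + uv for hv, uv in zip(hr, ur)]
            for hr, ur in zip(h, update)]
```

Because the residual branch is additive, stacking such layers sequentially (as the abstract describes) keeps node features close to their input distribution even as depth grows; with zero weights the layer reduces exactly to the identity.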

Publisher

Springer Science and Business Media LLC

