Exploring the Role of Explainable AI in Compliance Models for Fraud Prevention
Published: 2024-06-27
Issue: 5
Volume: 13
Pages: 232-239
ISSN: 2278-2540
Container-title: International Journal of Latest Technology in Engineering Management & Applied Science
Language: English
Short-container-title: IJLTEMAS
Author: Chiamaka Daniella Okenwa, Omoyin Damilola David, Adeyinka Orelaja, Oladayo Tosin Akinwande
Abstract
The integration of explainable Artificial Intelligence (XAI) methodologies into compliance frameworks holds considerable potential for strengthening fraud prevention strategies across diverse sectors. This paper explores the role of explainable AI in compliance models for fraud prevention. In highly regulated sectors such as finance, healthcare, and cybersecurity, XAI helps identify abnormal behaviour and ensure regulatory compliance by offering visible and comprehensible insights into AI-driven decision-making processes. The findings indicate the extent to which XAI can improve the efficacy, interpretability, and transparency of fraud prevention initiatives. Using XAI methodologies, stakeholders can comprehend judgements made by AI systems, spot fraudulent tendencies, and prioritize risk-reduction tactics. The paper also emphasizes how crucial interdisciplinary collaboration is to the advancement of XAI and to its incorporation into compliance models for fraud detection across multiple sectors. In conclusion, XAI plays a vital role in compliance models for fraud prevention: by using transparent and interpretable AI tools, entities can strengthen their resilience to fraudulent operations, build trust among stakeholders, and uphold sound principles within evolving regulatory systems.
Publisher
RSIS International