Empirical Study on The Role of Explainable AI (XAI) in Improving Customer Trust in AI-Powered Products
© 2025 by IJCTT Journal
Volume-73 Issue-2
Year of Publication : 2025
Authors : Devendra Singh Parmar, Hemlatha Kaur Saran
DOI : 10.14445/22312803/IJCTT-V73I2P106
How to Cite?
Devendra Singh Parmar, Hemlatha Kaur Saran, "Empirical Study on The Role of Explainable AI (XAI) in Improving Customer Trust in AI-Powered Products," International Journal of Computer Trends and Technology, vol. 73, no. 2, pp. 48-57, 2025. Crossref, https://doi.org/10.14445/22312803/IJCTT-V73I2P106
Abstract
Explainable Artificial Intelligence (XAI) has emerged as a crucial element in fostering customer trust in AI-powered products. As AI systems become increasingly embedded in daily life, the need for transparency, interpretability, and fairness in decision-making processes has gained prominence. This empirical study explores the role of XAI in enhancing customer trust across various industries, including healthcare, finance, and retail. By providing understandable explanations of AI decisions, XAI enables users to comprehend AI behavior, thus reducing skepticism and promoting acceptance. The research examines secondary data to analyze the correlation between XAI implementation and customer trust levels. Additionally, it discusses the challenges and opportunities in measuring trust, as well as the emerging trends and future trajectories of XAI in AI product development. Key findings suggest that the integration of XAI significantly improves perceived control and user understanding, which in turn fosters a more positive relationship with AI systems. Despite challenges such as technological complexity and the need for standardized solutions, XAI holds the potential to build a more transparent and ethical AI landscape. This research emphasizes the importance of continued innovation in XAI technologies to address trust-related concerns and facilitate broader adoption of AI-driven products. Future research should focus on developing standardized, universally accepted frameworks for XAI implementation to further enhance trust in AI applications.
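The abstract's central claim, that understandable per-decision explanations reduce skepticism, can be illustrated with a minimal sketch. For an additive (linear) scoring model, each feature's contribution to a decision can be reported directly as weight times value, which is the same spirit as SHAP-style attributions for additive models. This is not the authors' method; the feature names, weights, and applicant values below are entirely hypothetical.

```python
# Hypothetical linear credit-scoring model with a per-feature explanation,
# the simplest form of the "understandable explanations" XAI provides.
WEIGHTS = {"income": 0.4, "debt_ratio": -0.6, "years_employed": 0.2}
BIAS = 0.1

def score(applicant):
    """Linear score: bias plus the weighted sum of feature values."""
    return BIAS + sum(WEIGHTS[f] * v for f, v in applicant.items())

def explain(applicant):
    """Per-feature contributions (weight * value), largest impact first."""
    contribs = {f: WEIGHTS[f] * v for f, v in applicant.items()}
    return sorted(contribs.items(), key=lambda kv: -abs(kv[1]))

applicant = {"income": 1.2, "debt_ratio": 0.9, "years_employed": 0.5}
print(f"score = {score(applicant):.2f}")
for feature, contribution in explain(applicant):
    # A user-facing explanation: which features pushed the decision and how.
    print(f"  {feature}: {contribution:+.2f}")
```

In a real product the additive model would typically be replaced by a black-box model plus a post-hoc explainer (e.g., SHAP or LIME), but the user-facing artifact, a ranked list of signed feature contributions, is the same.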
Keywords
Artificial Intelligence, Customer Trust, Explainable AI, Interpretability, Transparency.
References
[1] Rajat Kumar Behera, Pradip Kumar Bala, and Nripendra P. Rana, “Creation of Sustainable Growth with Explainable Artificial Intelligence: An Empirical Insight from Consumer Packaged Goods Retailers,” Journal of Cleaner Production, vol. 399, 2023.
[CrossRef] [Google Scholar] [Publisher Link]
[2] Rajat Kumar Behera, Pradip Kumar Bala, and Nripendra P. Rana, “Creation of Sustainable Growth with Explainable Artificial Intelligence: An Empirical Insight from Consumer Packaged Goods Firms,” Elsevier, 2023.
[Google Scholar]
[3] Sonal Trivedi, Explainable Artificial Intelligence in Consumer-Centric Business Practices and Approaches, AI Impacts in Digital Consumer Behavior, IGI Global, pp. 1-372, 2024.
[CrossRef] [Google Scholar] [Publisher Link]
[4] Meenu Chaudhary et al., Introduction to Explainable AI (XAI) in E-Commerce, Role of Explainable Artificial Intelligence in E-Commerce, Springer, pp. 1-15, 2024.
[CrossRef] [Google Scholar] [Publisher Link]
[5] R. Kiarash Sadeghi et al., “Explainable Artificial Intelligence and Agile Decision-making in Supply Chain Cyber Resilience,” Decision Support Systems, vol. 180, pp. 1-10, 2024.
[CrossRef] [Google Scholar] [Publisher Link]
[6] Ezekiel Bernardo, and Rosemary Seva, “Affective Design Analysis of Explainable Artificial Intelligence (XAI): A User-centric Perspective,” Informatics, vol. 10, no. 1, pp. 1-24, 2023.
[CrossRef] [Google Scholar] [Publisher Link]
[7] Donghee Shin, “The Effects of Explainability and Causability on Perception, Trust, and Acceptance: Implications for Explainable AI,” International Journal of Human-Computer Studies, vol. 146, 2021.
[CrossRef] [Google Scholar] [Publisher Link]
[8] Basim Mahbooba et al., “Explainable Artificial Intelligence (XAI) to Enhance Trust Management in Intrusion Detection Systems using Decision Tree Model,” Complexity, vol. 2021, pp. 1-11, 2021.
[CrossRef] [Google Scholar] [Publisher Link]
[9] R. Machlev et al., “Explainable Artificial Intelligence (XAI) Techniques for Energy and Power Systems: Review, Challenges and Opportunities,” Energy and AI, vol. 9, pp. 1-13, 2022.
[CrossRef] [Google Scholar] [Publisher Link]
[10] Liangru Yu, and Yi Li, “Artificial Intelligence Decision-Making Transparency and Employees’ Trust: The Parallel Multiple Mediating Effect of Effectiveness and Discomfort,” Behavioral Science, vol. 12, no. 5, pp. 1-17, 2022.
[CrossRef] [Google Scholar] [Publisher Link]
[11] Omobolaji Olufunmilayo Olateju et al., “Exploring the Concept of Explainable AI and Developing Information Governance Standards for Enhancing Trust and Transparency in Handling Customer Data,” Journal of Engineering Research and Reports, vol. 26, no. 7, pp. 244-268, 2024.
[CrossRef] [Google Scholar] [Publisher Link]
[12] Nitin Rane, Saurabh Choudhary, and Jayesh Rane, “Explainable Artificial Intelligence (XAI) Approaches for Transparency and Accountability in Financial Decision-Making,” SSRN Electronic Journal, pp. 1-17, 2023.
[CrossRef] [Google Scholar] [Publisher Link]
[13] Cem Ozkurt, “Transparency in Decision-making: The Role of Explainable AI (XAI) in Customer Churn Analysis,” Information Technology in Economics and Business, vol. 2, no. 1, pp. 1-11, 2024.
[CrossRef] [Google Scholar] [Publisher Link]
[14] Rajesh Soundararajan, and V.M. Shenbagaraman, “Enhancing Financial Decision-making through Explainable AI and Blockchain Integration: Improving Transparency and Trust in Predictive Models,” Educational Administration: Theory and Practice, vol. 30, no. 4, pp. 9341-9351, 2024.
[CrossRef] [Publisher Link]
[15] Laith T. Khrais, “Role of Artificial Intelligence in Shaping Consumer Demand in E-commerce,” Future Internet, vol. 12, no. 12, pp. 1-14, 2020.
[CrossRef] [Google Scholar] [Publisher Link]
[16] Anirban Adak, Biswajeet Pradhan, and Nagesh Shukla, “Sentiment Analysis of Customer Reviews of Food Delivery Services using Deep Learning and Explainable Artificial Intelligence: Systematic Review,” Foods, vol. 11, no. 10, pp. 1-16, 2022.
[CrossRef] [Google Scholar] [Publisher Link]
[17] Peter Hase, and Mohit Bansal, “Evaluating Explainable AI: Which Algorithmic Explanations help users Predict Model Behavior?,” arXiv, pp. 1-13, 2020.
[CrossRef] [Google Scholar] [Publisher Link]
[18] Katharina Weitz et al., “Do you Trust me? Increasing User-trust by Integrating Virtual Agents in Explainable AI Interaction Design,” Proceedings of the 19th ACM International Conference on Intelligent Virtual Agents, Paris, France, pp. 7-9, 2019.
[CrossRef] [Google Scholar] [Publisher Link]
[19] Weisi Guo, “Explainable Artificial Intelligence for 6G: Improving Trust between Human and Machine,” IEEE Communications Magazine, vol. 58, no. 6, pp. 39-45, 2020.
[CrossRef] [Google Scholar] [Publisher Link]
[20] Q. Vera Liao, and Kush R. Varshney, “Human-centered Explainable AI (XAI): From Algorithms to User Experiences,” arXiv, pp. 1-18, 2021.
[CrossRef] [Google Scholar] [Publisher Link]
[21] Robert R. Hoffman et al., “Measures for Explainable AI: Explanation Goodness, User Satisfaction, Mental Models, Curiosity, Trust, and Human-AI Performance,” Frontiers in Computer Science, vol. 5, pp. 1-15, 2023.
[CrossRef] [Google Scholar] [Publisher Link]
[22] Andrew Silva et al., “Explainable Artificial Intelligence: Evaluating the Objective and Subjective Impacts of XAI on Human-agent Interaction,” International Journal of Human-Computer Interaction, vol. 39, no. 7, pp. 1390-1404, 2023.
[CrossRef] [Google Scholar] [Publisher Link]
[23] Delphine Ribes et al., “Trust Indicators and Explainable AI: A Study on User Perceptions,” Human-Computer Interaction – INTERACT 2021: 18th IFIP TC 13 International Conference, Proceedings, Part II, Bari, Italy, pp. 662-671, 2021.
[CrossRef] [Google Scholar] [Publisher Link]
[24] Donghee Shin, “User Perceptions of Algorithmic Decisions in the Personalized AI System: Perceptual Evaluation of Fairness, Accountability, Transparency, and Explainability,” Journal of Broadcasting and Electronic Media, vol. 64, no. 4, pp. 541-565, 2020.
[CrossRef] [Google Scholar] [Publisher Link]