The challenge of understanding explanations for user trust in AI: Insights from an experiment about job matching

Abstract

Artificial Intelligence (AI) is widely used in recruitment, with Job Recommender Systems (JRS) designed for job matching. However, poor matches can significantly impact individuals’ livelihoods, business profits, and organizational productivity. It is therefore critical to study whether users trust JRS and their outputs, especially after receiving explanations for job matches. In a between-subjects, mixed-methods online experiment, we study whether varying explanations of trust violations embedded in a job match influence user trust in the mock-up JRS algorithm JobMatcher. We found that such explanations had limited effect on trust in the JobMatcher algorithm or on participants’ understanding of the job match. We discuss these findings with regard to the complexity of explanations in relation to trust, the problematic role of high or low perceived AI agency for trust measures, and the need to address the moral dimension of trust.

Publication
20th IFIP TC13 International Conference on Human-Computer Interaction (INTERACT 2025)