Explainable Artificial Intelligence in Job Recommendation Systems

Tran, T.H.A. (2023)

This study takes an empirical approach to explainable AI in job recommendation systems (JRSs). Using the real-world CareerBuilder 2012 dataset from Kaggle, the study compares the recommendations generated by different JRS designs and then decomposes the factors that contribute to the ranking results. Two post hoc techniques, LIME and SHAP, and the inherently interpretable Explainable Boosting Machine (EBM) algorithm are implemented for explanation purposes. The experimental results confirm the potential of inherently explainable models (EBM) to mitigate the trade-off between performance and explainability in recommendation tasks. JRSs using the EBM algorithm perform on par with more complex counterparts based on the Factorization Machines algorithm, and they can generate both global and local explanations. JRSs using post hoc analysis can retrieve most of the critical features of the original model when explaining at a local scope. Equally important, the explanations from both approaches reveal bias in the recommendation systems, which in this case links to data quality. Finally, the study proposes several use cases that illustrate solutions to the challenges of explaining embedded features in job recommendation systems.