Georg Dedikov
Published: 2023-05-16
Total Pages: 157
Master's Thesis from the year 2023 in the subject Computer Science - SEO, Search Engine Optimization, grade: 1,0, University of Regensburg (Professur für Wirtschaftsinformatik, insb. Internet Business & Digitale Soziale Medien), language: English, abstract: This thesis presents a toolkit of 17 user experience (UX) principles, categorized according to their relevance to Explainable AI (XAI). The goal of Explainable AI has been widely associated in the literature with the dimensions of comprehensibility, usefulness, trust, and acceptance. Moreover, authors in academia postulate that research should focus on the development of holistic explanation interfaces rather than on single visual explanations. Consequently, XAI research should concentrate more on potential users and their needs than on purely technical aspects of XAI methods. Building on these three observations, the author of this thesis argues that valuable insights from the research area of User Interface (UI) and User Experience design can be brought into XAI research. In essence, UX is concerned with the design and evaluation of the pragmatic and hedonic aspects of a user's interaction with a system in a given context. The derived principles inform the subsequent prototyping of a custom XAI system called the Brain Tumor Assistant (BTA). Here, a pre-trained EfficientNetB0 serves as the Convolutional Neural Network, classifying x-ray images of the human brain into four classes with an overall accuracy of 98%. To generate factual explanations, Local Interpretable Model-agnostic Explanations (LIME) is then applied as the XAI method. The subsequent evaluation of the BTA is based on the User Experience Questionnaire (UEQ) of Laugwitz et al. (2008), with individual items adapted to the specific context of XAI. Quantitative data from a study with 50 participants in each of the control and treatment groups is used to present a standardized way of quantifying the dimensions of Usability and UX specifically for XAI systems. Furthermore, an A/B test provides evidence that visual explanations have a significant (α=0.05) positive effect on the dimensions of attractiveness, usefulness, controllability, and trustworthiness. In summary, this thesis shows that explanations in computer vision have a significantly positive effect not only on trustworthiness but also on other dimensions.
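To illustrate the classification step the abstract describes, the following is a minimal sketch of fine-tuning a pre-trained EfficientNetB0 for a four-class brain-image task in Keras. It is not the author's code; the input size, class labels, frozen-base strategy, and training settings are assumptions made for illustration.

import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import EfficientNetB0

NUM_CLASSES = 4  # assumed labels, e.g. glioma, meningioma, pituitary, no tumor

# Load the ImageNet-pretrained EfficientNetB0 without its classification head.
base = EfficientNetB0(include_top=False, weights="imagenet",
                      input_shape=(224, 224, 3), pooling="avg")
base.trainable = False  # keep the pre-trained features frozen initially

model = models.Sequential([
    base,
    layers.Dropout(0.2),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)  # datasets not shown

Reaching the reported 98% accuracy would of course depend on the actual dataset and fine-tuning schedule used in the thesis, which are not detailed in the abstract.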
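The explanation step uses LIME, for which the lime package provides an image explainer. The sketch below shows one common way to highlight the superpixels supporting the top prediction; it assumes the model variable from the previous sketch, and the placeholder image, segmentation defaults, and sample count are illustrative rather than the thesis settings.

import numpy as np
from lime import lime_image
from skimage.segmentation import mark_boundaries

# Placeholder input; in practice, use a real (224, 224, 3) brain image in [0, 255].
image = np.random.rand(224, 224, 3) * 255.0

def predict_fn(images):
    # LIME passes a batch of perturbed images; return class probabilities.
    return model.predict(np.array(images), verbose=0)

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    image,
    predict_fn,
    top_labels=4,
    num_samples=1000,
)

# Overlay the superpixels that most support the top predicted class.
temp, mask = explanation.get_image_and_mask(
    explanation.top_labels[0], positive_only=True,
    num_features=5, hide_rest=False)
overlay = mark_boundaries(temp / 255.0, mask)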
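The A/B comparison at α=0.05 could be quantified with a two-sample test on the per-group scale scores, for example as sketched below with Welch's t-test. The data here are simulated and hypothetical, and the thesis may use a different statistical procedure for the UEQ dimensions.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
control = rng.normal(loc=0.8, scale=1.0, size=50)    # e.g. trustworthiness, no explanations
treatment = rng.normal(loc=1.4, scale=1.0, size=50)  # e.g. trustworthiness, with LIME overlays

# Welch's t-test (does not assume equal variances between groups).
t_stat, p_value = stats.ttest_ind(treatment, control, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}, significant at 0.05: {p_value < 0.05}")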