
Evaluation of interactive machine learning systems

Abstract: The evaluation of interactive machine learning systems remains a difficult task. These systems learn from and adapt to the human, but at the same time, the human receives feedback and adapts to the system. Getting a clear understanding of these subtle mechanisms of co-operation and co-adaptation is challenging. In this chapter, we report on our experience in designing and evaluating various interactive machine learning applications from different domains. We argue for coupling two types of validation: algorithm-centred analysis, to study the computational behaviour of the system; and human-centred evaluation, to observe the utility and effectiveness of the application for end-users. We use a visual analytics application for guided search, built using an interactive evolutionary approach, as an exemplar of our work. Our observation is that human-centred design and evaluation complement algorithmic analysis, and can play an important role in addressing the "black-box" effect of machine learning. Finally, we discuss research opportunities that require human-computer interaction methodologies, in order to support both the visible and hidden roles that humans play in interactive machine learning.
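The interactive evolutionary approach mentioned above can be illustrated with a minimal loop: a population of candidate solutions is scored by the human in the loop, the better half is selected, and mutated copies replace the rest. This is a generic sketch, not the chapter's actual system; the `rate` function stands in for the human judgement (here it simply prefers values near 0.5) so the example is runnable.

```python
import random

def rate(candidate):
    # Stand-in for the human-in-the-loop judgement: higher is better,
    # with a (simulated) preference for values near 0.5.
    return 1.0 - abs(candidate - 0.5)

def evolve(pop_size=8, generations=20, mutation=0.1, seed=0):
    rng = random.Random(seed)
    population = [rng.random() for _ in range(pop_size)]
    for _ in range(generations):
        # The "user" scores every candidate; selection keeps the better half.
        scored = sorted(population, key=rate, reverse=True)
        parents = scored[: pop_size // 2]
        # Offspring are mutated copies of the selected parents, clamped to [0, 1].
        children = [min(1.0, max(0.0, p + rng.gauss(0, mutation)))
                    for p in parents]
        population = parents + children
    return max(population, key=rate)

print(evolve())
```

Because the best parent always survives to the next generation, the top fitness is non-decreasing; in a real interactive system the rating step would instead pause for user feedback, which is precisely what makes the co-adaptation described above hard to evaluate.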
Contributor: Migration ProdInra
Submitted on: June 5, 2020
Last modified on: August 5, 2022




Nadia Boukhelifa, Anastasia Bezerianos, Evelyne Lutton. Evaluation of interactive machine learning systems. Human and Machine Learning, Springer, pp. 20, 2018, Human-Computer Interaction Series, 978-3-319-90403-0; 978-3-319-90402-3. ⟨10.1007/978-3-319-90403-0_17⟩. ⟨hal-02791670⟩


