Power to the People: The Role of Humans in Interactive Machine Learning

This paper is written by Amershi, a researcher at Microsoft Research and a leading figure in the area where ML and HCI intersect.

The main issue the paper addresses is the important role of end-users in interactive machine learning, which is characterized by “rapid train-feedback-correct cycles” (a small sketch of such a cycle is given after the list below). The paper also summarizes common problems that users introduce when applying interactive machine learning:

* Users are not machines. Users may get frustrated if forced to interact with the system mechanically (e.g., in active learning); they naturally do not want to be mere data labelers.
* Users are biased. For example, in reinforcement learning, the rewards provided by users are often positively biased, which can cause the learner to pursue short-term reward.
* Users value transparency in learning systems, and transparency can help users provide better labels.
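The following is a minimal sketch (not the paper's own system) of a train-feedback-correct cycle, assuming a scikit-learn classifier, a synthetic data pool, and a simulated user who corrects the model's most uncertain prediction each round, in the spirit of the active-learning interaction mentioned above.

```python
# Hypothetical illustration of a rapid train-feedback-correct cycle.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic pool of points; the "true" labels stand in for the human user.
X_pool = rng.normal(size=(200, 2))
true_labels = (X_pool[:, 0] + X_pool[:, 1] > 0).astype(int)

# Seed the loop with one user-provided label from each class.
labeled_idx = [int(np.where(true_labels == 0)[0][0]),
               int(np.where(true_labels == 1)[0][0])]

model = LogisticRegression()
for cycle in range(5):
    # Train: fit on everything the user has labeled so far.
    model.fit(X_pool[labeled_idx], true_labels[labeled_idx])

    # Feedback: surface the prediction the model is least sure about.
    probs = model.predict_proba(X_pool)[:, 1]
    uncertainty = np.abs(probs - 0.5)
    uncertainty[labeled_idx] = np.inf      # skip already-labeled points
    query = int(np.argmin(uncertainty))

    # Correct: the user supplies the right label, and the cycle repeats.
    labeled_idx.append(query)
    acc = (model.predict(X_pool) == true_labels).mean()
    print(f"cycle {cycle}: queried point {query}, pool accuracy {acc:.2f}")
```

The point of the sketch is the shape of the loop, not the specific model: each iteration is short, the user sees the system's current behavior, and a small correction immediately changes the next round, which is exactly where the paper's concerns about user frustration and bias arise.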

My takeaways:

The keys to a general interactive machine learning system: timing (fast feedback cycles), trust (transparency of the algorithm and the system), and user-friendly interaction (avoiding misleading behavior and user frustration).

Discussions:

If we say that “Why Should I Trust You?” mainly emphasized end-users’ need to understand and trust ML models, then this paper mainly emphasizes end-users’ need to efficiently develop ML models for their own needs.

Interactive machine learning is still a growing field and lacks a widely accepted taxonomy and foundations. This also means there is ample room for collaboration between HCI/Vis and ML. HCI/Vis can leverage advances in ML to build more powerful tools for users, while ML gains both practical issues raised by real users and new opportunities to develop frameworks that support realistic assumptions about users.