Presented at NeurIPS 2020 by Fernando Diaz, Brian St. Thomas, and Praveen Chandar.
The evaluation and optimization of machine learning systems have largely adopted well-known performance metrics like accuracy (for classification) or squared error (for regression). While these metrics are reusable across a variety of machine learning tasks, they make strong assumptions that often do not hold when the model is situated in a broader technical or sociotechnical system. This is especially true of systems that interact with large populations of humans attempting to complete a task or satisfy a need (e.g. search, recommendation, game-playing). In this tutorial, we present methods for developing evaluation metrics grounded in what users expect of the system and how they respond to system decisions. The goal of this tutorial is both to share methods for designing user-based quantitative metrics and to motivate new research into optimizing for these more structured metrics.
@inproceedings{neurips-2020-tutorial:beyond-accuracy,
  author       = {Praveen Chandar and Fernando Diaz and Brian {St. Thomas}},
  title        = {Beyond Accuracy: Grounding Evaluation Metrics for Human-Machine Learning Systems},
  booktitle    = {Advances in Neural Information Processing Systems},
  year         = {2020},
  howpublished = {\url{https://github.com/pchandar/beyond-accuracy-tutorial}}
}
- Part 1: Introduction
- Part 2: Offline Metrics
- Part 3: Behavior-based Metrics
- Part 4: Multiple Metrics