Michael Kim (Stanford University)
Wednesday, 12.12.2018, 12:30
As algorithmic prediction systems have become more widespread, so too have concerns that these systems may discriminate against groups of people protected by law and ethics. We present a recent line of work that takes a complexity-theoretic perspective on combating discrimination in prediction systems. We focus on fair classification within the versatile framework of Dwork et al. [ITCS'12], which assumes the existence of a metric that measures similarity between pairs of individuals. Unlike earlier work, we do not assume that the entire metric is known to the learning algorithm; instead, the learner has access only to a small random sample. We discuss a new notion, called *multicalibration*, which aims to provide strong fairness guarantees from such a small sample. Multicalibration is parameterized by a rich collection C of (possibly overlapping) "subpopulations" of individuals. At a high level, multicalibration guarantees that any two subpopulations that are similar, according to the task at hand, must be treated similarly, so long as these subpopulations are identified by the class C.
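The requirement underlying multicalibration, that a predictor's scores be calibrated simultaneously on every subpopulation in the collection C, can be illustrated with a small auditing sketch in Python. The function name, the fixed-width binning of scores, and the tolerance `alpha` below are illustrative assumptions for exposition, not the algorithm from the talk:

```python
import numpy as np

def multicalibration_violations(preds, labels, subgroups, n_bins=10, alpha=0.1):
    """Audit a predictor for multicalibration (illustrative sketch).

    preds     : array of scores in [0, 1]
    labels    : array of binary outcomes
    subgroups : dict mapping subpopulation name -> boolean mask
                (subpopulations may overlap, as in the class C)
    Returns (group, bin, gap) triples where the average outcome in a
    score bin deviates from the average score by more than alpha.
    """
    # Assign each prediction to one of n_bins equal-width buckets.
    bins = np.minimum((preds * n_bins).astype(int), n_bins - 1)
    violations = []
    for name, mask in subgroups.items():
        for b in range(n_bins):
            idx = mask & (bins == b)
            if idx.sum() == 0:
                continue
            # Calibration on this (subpopulation, bucket) pair:
            # the mean outcome should be close to the mean score.
            gap = abs(labels[idx].mean() - preds[idx].mean())
            if gap > alpha:
                violations.append((name, b, gap))
    return violations
```

A predictor can look calibrated on the whole population while being badly miscalibrated on a subpopulation in C; running this audit on overlapping subgroup masks surfaces exactly those (subpopulation, score-bucket) pairs.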
The talk will draw on joint work with Amirata Ghorbani, Úrsula Hébert-Johnson, Omer Reingold, Guy N. Rothblum, and James Zou, and will assume no prior background in algorithmic fairness.