arXiv:1706.02744v2 [stat.ML] 21 Jan 2018
Avoiding Discrimination through Causal Reasoning

Niki Kilbertus†‡ nkilbertus@tue.mpg.de
Mateo Rojas-Carulla†‡ mrojas@tue.mpg.de
Giambattista Parascandolo†§ gparascandolo@tue.mpg.de
Moritz Hardt∗ hardt@berkeley.edu
Dominik Janzing† janzing@tue.mpg.de
Bernhard Schölkopf† bs@tue.mpg.de

† Max Planck Institute for Intelligent Systems
‡ University of Cambridge
§ Max Planck ETH Center for Learning Systems
∗ University of California, Berkeley

Abstract

Recent work on fairness in machine learning has focused on various statistical discrimination criteria and how they trade off. Most of these criteria are observational: They depend only on the joint distribution of predictor, protected attribute, features, and outcome. While convenient to work with, observational criteria have severe inherent limitations that prevent them from resolving matters of fairness conclusively. Going beyond observational criteria, we frame the problem of discrimination based on protected attributes in the language of causal reasoning. This viewpoint shifts attention from "What is the right fairness criterion?" to "What do we want to assume about our model of the causal data generating process?" Through the lens of causality, we make several contributions. First, we crisply articulate why and when observational criteria fail, thus formalizing what was before a matter of opinion. Second, our approach exposes previously ignored subtleties and why they are fundamental to the problem. Finally, we put forward natural causal non-discrimination criteria and develop algorithms that satisfy them.

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

1 Introduction

As machine learning progresses rapidly, its societal impact has come under scrutiny. An important concern is potential discrimination based on protected attributes such as gender, race, or religion. Since learned predictors and risk scores increasingly support or even replace human judgment, there is an opportunity to formalize what harmful discrimination means and to design algorithms that avoid it. However, researchers have found it difficult to agree on a single measure of discrimination. As of now, there are several competing approaches, representing different opinions and striking different trade-offs.

Most of the proposed fairness criteria are observational: They depend only on the joint distribution of predictor R, protected attribute A, features X, and outcome Y. For example, the natural requirement that R and A must be statistically independent is referred to as demographic parity. Some approaches transform the features X to obfuscate the information they contain about A [1]. The recently proposed equalized odds constraint [2] demands that the predictor R and the attribute A be independent conditional on the actual outcome Y. All three are examples of observational approaches.
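For reference, the two named criteria can be stated as conditional independence requirements. The display below uses the paper's notation (predictor R, protected attribute A, outcome Y) but is a standard formulation rather than a verbatim excerpt:

\[
R \perp\!\!\!\perp A
\quad\Longleftrightarrow\quad
\Pr(R = r \mid A = a) = \Pr(R = r) \;\; \text{for all } r, a
\qquad \text{(demographic parity)}
\]
\[
R \perp\!\!\!\perp A \mid Y
\quad\Longleftrightarrow\quad
\Pr(R = r \mid A = a, Y = y) = \Pr(R = r \mid Y = y) \;\; \text{for all } r, a, y
\qquad \text{(equalized odds)}
\]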
A growing line of work points at the insufficiency of existing definitions. Hardt, Price and Srebro [2] construct two scenarios with intuitively different social interpretations that admit identical joint distributions over (R, A, Y, X). Thus, no observational criterion can distinguish them. While there are non-observational criteria, notably the early work on individual fairness [3], these have not yet gained traction. So, it might appear that the community has reached an impasse.

1.1 Our contributions

We assay the problem of discrimination in machine learning in the language of causal reasoning. This viewpoint supports several contributions:

• Revisiting the two scenarios proposed in [2], we articulate a natural causal criterion that formally distinguishes them. In particular, we show that observational criteria are unable to determine if a protected attribute has direct causal influence on the predictor that is not mitigated by resolving variables.
• We point out subtleties in fair decision making that arise naturally from a causal perspective, but have gone widely overlooked in the past. Specifically, we formally argue for the need to distinguish between the underlying concept behind a protected attribute, such as race or gender, and its proxies available to the algorithm, such as visual features or name.
• We introduce and discuss two natural causal criteria centered around the notion of interventions (relative to a causal graph) to formally describe specific forms of discrimination.
• Finally, we initiate the study of algorithms that avoid these forms of discrimination. Under certain linearity assumptions about the underlying causal model generating the data, an algorithm to remove a specific kind of discrimination leads to a simple and natural heuristic.

At a higher level, our work proposes a shift from trying to find a single statistical fairness criterion to arguing about properties of the data and which assumptions about the generating process are justified. Causality provides a flexible framework for organizing such assumptions.

1.2 Related work

Demographic parity and its variants have been discussed in numerous papers, e.g., [1, 4–6]. While demographic parity is easy to work with, the authors of [3] already highlighted its insufficiency as a fairness constraint. In an attempt to remedy the shortcomings of demographic parity, [2] proposed two notions, equal opportunity and equal odds, that were also considered in [7]. A review of various fairness criteria can be found in [8], where they are discussed in the context of criminal justice. In [9, 10] it has been shown that imperfect predictors cannot simultaneously satisfy equal odds and calibration unless the groups have identical base rates, i.e., rates of positive outcomes.

A starting point for our investigation is the unidentifiability result of [2]. It shows that observational criteria are too weak to distinguish two intuitively very different scenarios. However, the work does not provide a formal mechanism to articulate why and how these scenarios should be considered different. Inspired by Pearl's
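The unidentifiability point can be made concrete with a small simulation. The sketch below (Python/NumPy; our own toy construction, not the specific scenarios of [2]) samples from two structural models, a chain A → X → Y and a fork A → X, A → Y in which X is a pure proxy: both induce the same joint distribution over (A, X, Y), yet an intervention on X separates them.

    import numpy as np

    rng = np.random.default_rng(0)

    def flip(v, p):
        # Flip each binary entry of v independently with probability p.
        return np.where(rng.random(v.shape[0]) < p, 1 - v, v)

    def sample_chain(n, do_x=None):
        # Chain A -> X -> Y: the influence of A on Y is fully mediated by X.
        a = rng.integers(0, 2, n)
        x = a.copy() if do_x is None else np.full(n, do_x)
        y = flip(x, 0.1)          # Y is a noisy copy of X
        return a, x, y

    def sample_fork(n, do_x=None):
        # Fork A -> X, A -> Y: X is a pure proxy; A causes Y directly.
        a = rng.integers(0, 2, n)
        x = a.copy() if do_x is None else np.full(n, do_x)
        y = flip(a, 0.1)          # Y is a noisy copy of A, not of X
        return a, x, y

    def joint(a, x, y):
        # Empirical joint distribution over the binary triple (A, X, Y).
        counts = np.zeros((2, 2, 2))
        np.add.at(counts, (a, x, y), 1)
        return counts / counts.sum()

    n = 200_000

    # Observationally indistinguishable: the joints agree up to sampling
    # noise, so no observational criterion can separate the two models.
    gap = np.abs(joint(*sample_chain(n)) - joint(*sample_fork(n))).max()
    print(f"max difference between joints: {gap:.4f}")   # close to 0

    # Causally different: forcing X by intervention changes Y only in the chain.
    _, _, y_chain = sample_chain(n, do_x=0)
    _, _, y_fork = sample_fork(n, do_x=0)
    print(y_chain.mean(), y_fork.mean())                 # about 0.1 vs 0.5

In the chain, X mediates all of the influence of A on Y and could be deemed a resolving variable; in the fork, it is merely a proxy, and only an interventional, i.e., causal, notion can tell the two situations apart.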
