Delayed Impact of Fair Machine Learning

Lydia T. Liu*, Sarah Dean*, Esther Rolf*, Max Simchowitz*, Moritz Hardt*

April 10, 2018

arXiv:1803.04383v2 [cs.LG] 7 Apr 2018

Abstract

Fairness in machine learning has predominantly been studied in static classification settings without concern for how decisions change the underlying population over time. Conventional wisdom suggests that fairness criteria promote the long-term well-being of those groups they aim to protect. We study how static fairness criteria interact with temporal indicators of well-being, such as long-term improvement, stagnation, and decline in a variable of interest. We demonstrate that even in a one-step feedback model, common fairness criteria in general do not promote improvement over time, and may in fact cause harm in cases where an unconstrained objective would not. We completely characterize the delayed impact of three standard criteria, contrasting the regimes in which these exhibit qualitatively different behavior. In addition, we find that a natural form of measurement error broadens the regime in which fairness criteria perform favorably. Our results highlight the importance of measurement and temporal modeling in the evaluation of fairness criteria, suggesting a range of new challenges and trade-offs.

1 Introduction

Machine learning commonly considers static objectives defined on a snapshot of the population at one instant in time; consequential decisions, in contrast, reshape the population over time. Lending practices, for example, can shift the distribution of debt and wealth in the population. Job advertisements allocate opportunity. School admissions shape the level of education in a community.

Existing scholarship on fairness in automated decision-making criticizes unconstrained machine learning for its potential to harm historically underrepresented or disadvantaged groups in the population [Executive Office of the President, 2016, Barocas and Selbst, 2016]. Consequently, a variety of fairness criteria have been proposed as constraints on standard learning objectives. Even though, in each case, these constraints are clearly intended to protect the disadvantaged group by an appeal to intuition, a rigorous argument to that effect is often lacking.

In this work, we formally examine under what circumstances fairness criteria do indeed promote the long-term well-being of disadvantaged groups, measured in terms of a temporal variable of interest. Going beyond the standard classification setting, we introduce a one-step feedback model of decision-making that exposes how decisions change the underlying population over time.

* Department of Electrical Engineering and Computer Sciences, University of California, Berkeley
Our running example is a hypothetical lending scenario. There are two groups in the population with features described by a summary statistic, such as a credit score, whose distribution differs between the two groups. The bank can choose thresholds for each group at which loans are offered. While group-dependent thresholds may face legal challenges [Ross and Yinger, 2006], they are generally inevitable for some of the criteria we examine.

The impact of a lending decision has multiple facets. A default event not only diminishes profit for the bank, it also worsens the financial situation of the borrower as reflected in a subsequent decline in credit score. A successful lending outcome leads to profit for the bank and also to an increase in credit score for the borrower.

When thinking of one of the two groups as disadvantaged, it makes sense to ask what lending policies (choices of thresholds) lead to an expected improvement in the score distribution within that group. An unconstrained bank would maximize profit, choosing thresholds that meet a break-even point above which it is profitable to give out loans. One frequently proposed fairness criterion, sometimes called demographic parity, requires the bank to lend to both groups at an equal rate. Subject to this requirement the bank would continue to maximize profit to the extent possible. Another criterion, originally called equality of opportunity, equalizes the true positive rates between the two groups, thus requiring the bank to lend in both groups at an equal rate among individuals who repay their loan. Other criteria are natural, but for clarity we restrict our attention to these three.

Do these fairness criteria benefit the disadvantaged group? When do they show a clear advantage over unconstrained classification? Under what circumstances does profit maximization work in the interest of the individual? These are important questions that we begin to address in this work.

1.1 Contributions

We introduce a one-step feedback model that allows us to quantify the long-term impact of classification on different groups in the population. We represent each of the two groups A and B by a score distribution $\pi_A$ and $\pi_B$, respectively. The support of these distributions is a finite set $\mathcal{X}$ corresponding to the possible values that the score can assume. We think of the score as highlighting one variable of interest in a specific domain such that higher score values correspond to a higher probability of a positive outcome.

An institution chooses selection policies $\tau_A, \tau_B : \mathcal{X} \to [0, 1]$ that assign to each value in $\mathcal{X}$ a number representing the rate of selection for that value. In our example, these policies specify the lending rate at a given credit score within a given group. The institution will always maximize its utility (defined formally later) subject to either (a) no constraint, (b) equality of selection rates, or (c) equality of true positive rates.

We assume the availability of a function $\Delta : \mathcal{X} \to \mathbb{R}$ such that $\Delta(x)$ provides the expected change in score for a selected individual at score $x$. The central quantity we study is the expected difference in the mean score in group $j \in \{A, B\}$ that results from an institution's policy, $\Delta\mu_j$, defined formally in Equation (2). When modeling the problem, the expected mean difference can also absorb external factors such as "reversion to the mean" so long as they are mean-preserving. Qualitatively, we distinguish between long-term improvement ($\Delta\mu_j > 0$), stagnation ($\Delta\mu_j = 0$), and decline ($\Delta\mu_j < 0$).
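To make these quantities concrete, the following minimal sketch computes $\Delta\mu_j$ for a toy score distribution and policy. The support, masses, policy, and $\Delta$ values here are hypothetical placeholders chosen for illustration, not numbers from the paper.

```python
import numpy as np

# Hypothetical discrete score support and a toy group distribution.
scores = np.array([300, 400, 500, 600, 700, 800])    # the finite support X
pi = np.array([0.10, 0.15, 0.25, 0.25, 0.15, 0.10])  # pi_j, sums to 1

# A selection policy tau_j : X -> [0, 1]; here a randomized threshold at 600.
tau = np.array([0.0, 0.0, 0.0, 0.5, 1.0, 1.0])

# Expected score change Delta(x) for a *selected* individual at score x
# (illustrative values; the paper only assumes such a function is given).
delta = np.array([-40.0, -20.0, -5.0, 10.0, 25.0, 40.0])

# Equation (2): expected change in the group's mean score.
delta_mu = np.sum(pi * tau * delta)

# Qualitative regimes from Section 1.1.
regime = "improvement" if delta_mu > 0 else ("stagnation" if delta_mu == 0 else "decline")
print(f"delta_mu = {delta_mu:.2f} -> {regime}")
```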
Our findings can be summarized as follows:

1. Both fairness criteria (equal selection rates, equal true positive rates) can lead to all possible outcomes (improvement, stagnation, and decline) in natural parameter regimes. We provide a complete characterization of when each criterion leads to each outcome in Section 3. In particular:
• There is a class of settings where equal selection rates cause decline, whereas equal true positive rates do not (Corollary 3.5).
• Under a mild assumption, the institution's optimal unconstrained selection policy can never lead to decline (Proposition 3.1).

2. We introduce the notion of an outcome curve (Figure 1), which succinctly describes the different regimes in which one criterion is preferable over the others.

3. We perform experiments on FICO credit score data from 2003 and show that under various models of bank utility and score change, the outcomes of applying fairness criteria are in line with our theoretical predictions.

4. We discuss how certain types of measurement error (e.g., the bank underestimating the repayment ability of the disadvantaged group) affect our comparison. We find that measurement error narrows the regime in which fairness criteria cause decline, suggesting that measurement should be a factor when motivating these criteria.

5. We consider alternatives to hard fairness constraints.

• We evaluate the optimization problem where the fairness criterion is a regularization term in the objective. Qualitatively, this leads to the same findings.
• We discuss the possibility of optimizing for group score improvement $\Delta\mu_j$ directly, subject to institution utility constraints. The resulting solution provides an interesting possible alternative to existing fairness criteria.

We focus on the impact of a selection policy over a single epoch. The motivation is that the designer of a system usually has an understanding of the time horizon after which the system is evaluated and possibly redesigned. Formally, nothing prevents us from repeatedly applying our model and tracing changes over multiple epochs. In reality, however, it is plausible that over greater time periods, economic background variables might dominate the effect of selection.

Reflecting on our findings, we argue that careful temporal modeling is necessary in order to accurately evaluate the impact of different fairness criteria on the population. Moreover, an understanding of measurement error is important in assessing the advantages of fairness criteria relative to unconstrained selection. Finally, the nuances of our characterization underline how intuition may be a poor guide in judging the long-term impact of fairness constraints.

1.2 Related work

Recent work by Hu and Chen [2018] considers a model for long-term outcomes and fairness in the labor market. They propose imposing the demographic parity constraint in a temporary labor market in order to provably achieve an equitable long-term equilibrium in the permanent labor market, reminiscent of economic arguments for affirmative action [Foster and Vohra, 1992]. The equilibrium analysis of their labor market dynamics model allows for specific conclusions relating fairness criteria to long-term outcomes. Our general framework is complementary to this type of domain-specific approach. Fuster et al. [2017] consider the problem of fairness in credit markets from a different perspective. Their goal is to study the effect of machine learning on interest rates in different groups at an equilibrium, under a static model without feedback.
Ensign et al. [2017] consider feedback loops in predictive policing, where the police more heavily monitor high-crime neighborhoods, thus further increasing the measured number of crimes in those neighborhoods. While that work addresses an important temporal phenomenon using the theory of urns, it is rather different from our one-step feedback model both conceptually and technically.

Demographic parity and its related formulations have been considered in numerous papers [e.g., Calders et al., 2009, Zafar et al., 2017]. Hardt et al. [2016] introduced the equality of opportunity constraint that we consider and demonstrated limitations of a broad class of criteria. Kleinberg et al. [2017] and Chouldechova [2016] point out the tension between "calibration by group" and equal true/false positive rates. These trade-offs carry over to some extent to the case where we only equalize true positive rates [Pleiss et al., 2017]. A growing literature on fairness in the "bandits" setting of learning [see Joseph et al., 2016, et sequelae] deals with online decision-making that ought not to be confused with our one-step feedback setting. Finally, there has been much work in the social sciences on analyzing the effect of affirmative action [see, e.g., Keith et al., 1985, Kalev et al., 2006].

1.3 Discussion

In this paper, we advocate for a view toward long-term outcomes in the discussion of "fair" machine learning. We argue that without a careful model of delayed outcomes, we cannot foresee the impact a fairness criterion would have if enforced as a constraint on a classification system. However, if such an accurate outcome model is available, we show that there are more direct ways to optimize for positive outcomes than via existing fairness criteria. We outline such an outcome-based solution in Section 4.3. Specifically, in the credit setting, the outcome-based solution corresponds to giving out more loans to the protected group in a way that reduces profit for the bank compared to unconstrained profit maximization, but avoids loaning to those who are unlikely to benefit, resulting in a maximally improved group average credit score. The extent to which such a solution could form the basis of successful regulation depends on the accuracy of the available outcome model.

This raises the question of whether our model of outcomes is rich enough to faithfully capture realistic phenomena. By focusing on the impact that selection has on individuals at a given score, we model the effects for those not selected as zero-mean. For example, not getting a loan in our model has no negative effect on the credit score of an individual. (In reality, a denied credit inquiry may lower one's credit score, but the effect is small compared to a default event.) This does not mean that wrongful rejection (i.e., a false negative) has no visible manifestation in our model. If a classifier has a higher false negative rate in one group than in another, we expect the classifier to increase the disparity between the two groups (under natural assumptions). In other words, in our outcome-based model, the harm of denied opportunity manifests as growing disparity between the groups. The cost of a false negative could also be incorporated directly into the outcome-based model by a simple modification (see Footnote 2). This may be fitting in some applications where the immediate impact of a false negative on the individual is not zero-mean, but significantly reduces their future success probability.

In essence, the formalism we propose requires us to understand the two-variable causal mechanism that translates decisions to outcomes.
This can be seen as relaxing the requirements compared with recent work on avoiding discrimination through causal reasoning, which often requires stronger assumptions [Kusner et al., 2017, Nabi and Shpitser, 2017, Kilbertus et al., 2017]. In particular, these works require knowledge of how sensitive attributes (such as gender, race, or proxies thereof)
causally relate to various other variables in the data. Our model avoids the delicate modeling step involving the sensitive attribute, and instead focuses on an arguably more tangible economic mechanism. Nonetheless, depending on the application, such an understanding might necessitate greater domain knowledge and additional research into the specifics of the application. This is consistent with much scholarship that points to the context-sensitive nature of fairness in machine learning.

2 Problem Setting

We consider two groups A and B, which comprise a $g_A$ and $g_B = 1 - g_A$ fraction of the total population, and an institution which makes a binary decision for each individual in each group, called selection. Individuals in each group are assigned scores in $\mathcal{X} := [C]$, and the scores for group $j \in \{A, B\}$ are distributed according to $\pi_j \in \text{Simplex}^{C-1}$. The institution selects a policy $\tau := (\tau_A, \tau_B) \in [0, 1]^{2C}$, where $\tau_j(x)$ corresponds to the probability that the institution selects an individual in group $j$ with score $x$. One should think of a score as an abstract quantity which summarizes how well an individual is suited to being selected; examples are provided at the end of this section.

We assume that the institution is utility-maximizing, but may impose certain constraints to ensure that the policy $\tau$ is fair, in a sense described in Section 2.2. We assume that there exists a function $u : \mathcal{X} \to \mathbb{R}$ such that the institution's expected utility for a policy $\tau$ is given by

$\mathcal{U}(\tau) = \sum_{j \in \{A,B\}} g_j \sum_{x \in \mathcal{X}} \tau_j(x)\, \pi_j(x)\, u(x)$.   (1)

Novel to this work, we focus on the effect of the selection policy $\tau$ on the groups A and B. We quantify these outcomes in terms of an average effect that a policy $\tau_j$ has on group $j$. Formally, for a function $\Delta : \mathcal{X} \to \mathbb{R}$, we define the average change of the mean score $\mu_j$ for group $j$ as

$\Delta\mu_j(\tau) := \sum_{x \in \mathcal{X}} \pi_j(x)\, \tau_j(x)\, \Delta(x)$.   (2)

We remark that many of our results also go through if $\Delta\mu_j(\tau)$ simply refers to an abstract change in well-being, not necessarily a change in the mean score. Furthermore, it is possible to modify the definition of $\Delta\mu_j(\tau)$ so that it directly considers the outcomes of those who are not selected.[2]

Lastly, we assume that the success of an individual is independent of their group given the score; that is, the score summarizes all relevant information about the success event, so there exists a function $\rho : \mathcal{X} \to [0, 1]$ such that individuals of score $x$ succeed with probability $\rho(x)$.

We now introduce the specific domain of credit scores as a running example in the rest of the paper, after which we present two more examples showing the general applicability of our formulation to many domains.

[2] If we consider functions $\Delta_p : \mathcal{X} \to \mathbb{R}$ and $\Delta_n : \mathcal{X} \to \mathbb{R}$ to represent the average effect of selection and non-selection respectively, then $\Delta\mu_j(\tau) := \sum_{x \in \mathcal{X}} \pi_j(x) \big(\tau_j(x)\Delta_p(x) + (1 - \tau_j(x))\Delta_n(x)\big)$. This model corresponds to replacing $\Delta(x)$ in the original outcome definition with $\Delta_p(x) - \Delta_n(x)$, and adding an offset $\sum_{x \in \mathcal{X}} \pi_j(x)\Delta_n(x)$. Under the assumption that $\Delta_p(x) - \Delta_n(x)$ increases in $x$, this model gives rise to outcome curves resembling those in Figure 1 up to vertical translation. All presented results hold unchanged under the further assumption that $\Delta\mu(\beta^{\text{MaxUtil}}) \ge 0$.
Example 2.1 (Credit scores). In the setting of loans, scores $x \in [C]$ represent credit scores, and the bank serves as the institution. The bank chooses to grant or refuse loans to individuals according to a policy $\tau$. Both bank and personal utilities are given as functions of loan repayment, and therefore depend on the success probabilities $\rho(x)$, representing the probability that any individual with credit score $x$ can repay a loan within a fixed time frame. The expected utility to the bank is given by the expected return from a loan, which can be modeled as an affine function of $\rho(x)$:

$u(x) = u_+ \rho(x) + u_- (1 - \rho(x))$,

where $u_+$ denotes the profit when loans are repaid and $u_-$ the loss when they are defaulted on. Individual outcomes of being granted a loan are based on whether or not an individual repays the loan, and a simple model for $\Delta(x)$ may also be affine in $\rho(x)$:

$\Delta(x) = c_+ \rho(x) + c_- (1 - \rho(x))$,

modified accordingly at boundary states. The constant $c_+$ denotes the gain in credit score if loans are repaid and $c_-$ is the score penalty in case of default.

Example 2.2 (Advertising). A second illustrative example is given by advertising agencies making decisions about which groups to target. An individual with product interest score $x$ responds positively to an ad with probability $\rho(x)$. The ad agency experiences utility $u(x)$ related to click-through rates, which increases with $\rho(x)$. Individuals who see the ad but are uninterested may react negatively (becoming less interested in the product), and $\Delta(x)$ encodes the interest change. If the product is a positive good like education or employment opportunities, interest can correspond to well-being. Thus the advertising agency's incentive to show ads only to individuals with extremely high interest may leave behind groups whose interest is lower on average. A related historical example occurred in advertisements for computers in the 1980s, where male consumers were targeted over female consumers, arguably contributing to the current gender gap in computing.

Example 2.3 (College admissions). The scenario of college admissions or scholarship allotments can also be considered within our framework. Colleges may select certain applicants for acceptance according to a score $x$, which could be thought of as encoding a "college preparedness" measure. The students who are admitted might "succeed" (this could be interpreted as graduating, graduating with honors, finding a job placement, etc.) with some probability $\rho(x)$ depending on their preparedness. The college might experience a utility $u(x)$ corresponding to alumni donations, or a positive rating when a student succeeds; it might also see a drop in rating or a loss of invested scholarship money when a student is unsuccessful. The student's success in college will affect their later success, which could be modeled generally by $\Delta(x)$. In this scenario, it is challenging to ensure that a single summary statistic $x$ captures enough information about a student; it may be more appropriate to consider $x$ as a vector, along with more complex forms of $\rho(x)$.

While a variety of applications are modeled faithfully within our framework, there are limitations to the accuracy with which real-life phenomena can be measured by strictly binary decisions and success probabilities. Such binary rules are necessary for the definition and execution of existing fairness criteria (see Section 2.2), and, as we will see, even modeling these facets of decision making as binary allows for complex and interesting behavior.

2.1 The Outcome Curve

We now introduce important outcome regimes, stated in terms of the change in average group score. A policy $(\tau_A, \tau_B)$ is said to cause active harm to group $j$ if $\Delta\mu_j(\tau_j) < 0$, stagnation if $\Delta\mu_j(\tau_j) = 0$, and improvement if $\Delta\mu_j(\tau_j) > 0$.
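To ground the affine model of Example 2.1, the sketch below instantiates $u(x)$ and $\Delta(x)$ in code, borrowing the parameter choices used later in Section 7 ($u_-/u_+ = -4$, $c_+ = 75$, $c_- = -150$). The logistic form of $\rho(x)$ is a hypothetical stand-in, since the paper estimates $\rho$ from data rather than assuming a functional form.

```python
import numpy as np

def repay_prob(x, midpoint=550.0, steepness=0.01):
    # Hypothetical repayment probability rho(x); the model only requires
    # that higher scores correspond to a higher probability of success.
    return 1.0 / (1.0 + np.exp(-steepness * (np.asarray(x, dtype=float) - midpoint)))

# Affine forms from Example 2.1, with the ratios used in Section 7.
u_plus, u_minus = 1.0, -4.0      # bank profit on repayment, loss on default
c_plus, c_minus = 75.0, -150.0   # borrower score gain on repayment, penalty on default

def bank_utility(x):
    rho = repay_prob(x)
    return u_plus * rho + u_minus * (1.0 - rho)

def score_change(x):
    rho = repay_prob(x)
    return c_plus * rho + c_minus * (1.0 - rho)

# MaxUtil lends exactly where u(x) > 0, i.e., above the break-even score.
scores = np.arange(300, 851)
print("bank breaks even near score", scores[bank_utility(scores) > 0].min())
```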
Under our model, MaxUtil policies can be chosen in a standard fashion which applies the same threshold $\tau^{\text{MaxUtil}}$ for both groups, and is agnostic to the distributions $\pi_A$ and $\pi_B$. Hence, if we define

$\Delta\mu_j^{\text{MaxUtil}} := \Delta\mu_j(\tau^{\text{MaxUtil}})$,   (3)
we say that a policy causes relative harm to group $j$ if $\Delta\mu_j(\tau_j) < \Delta\mu_j^{\text{MaxUtil}}$, and relative improvement if $\Delta\mu_j(\tau_j) > \Delta\mu_j^{\text{MaxUtil}}$. In particular, we focus on these outcomes for a disadvantaged group, and consider whether imposing a fairness constraint improves its outcomes relative to the MaxUtil strategy. From this point forward, we take A to be the disadvantaged or protected group.

[Figure 1 (the outcome curve): The horizontal axis represents the selection rate for the population; the vertical axis represents the mean change in score. Panel (a) depicts the full spectrum of outcome regimes, with colors indicating the regions of active harm, relative harm, and no harm. Panel (b) shows a group that has much potential for gain; panel (c) a group that has no potential for gain.]

Figure 1 displays the important outcome regimes in terms of selection rates $\beta_j := \sum_{x \in \mathcal{X}} \pi_j(x)\tau_j(x)$. This succinct characterization is possible when considering decision rules based on (possibly randomized) score thresholding, in which all individuals with scores above a threshold are selected. In Section 5, we justify the restriction to such threshold policies by showing that it preserves optimality. In Section 5.1, we show that the outcome curve is concave, thus implying that it takes the shape depicted in Figure 1.

To explicitly connect selection rates to decision policies, we define the rate function $r_{\pi_j}(\tau_j)$, which returns the proportion of group $j$ selected by the policy. We show that this function is invertible for a suitable class of threshold policies, and in fact the outcome curve is precisely the graph of the map from selection rate to outcome, $\beta \mapsto \Delta\mu_A(r_{\pi_A}^{-1}(\beta))$. Next, we define the values of $\beta$ that mark the boundaries of the outcome regions.

Definition 2.1 (Selection rates of interest). Given the protected group A, the following selection rates are of interest in distinguishing between qualitatively different classes of outcomes (Figure 1). We define $\beta^{\text{MaxUtil}}$ as the selection rate for A under MaxUtil; $\beta_0$ as the harm threshold, such that $\Delta\mu_A(r_{\pi_A}^{-1}(\beta_0)) = 0$; $\beta^*$ as the selection rate at which $\Delta\mu_A$ is maximized; and $\bar\beta$ as the outcome-complement of the MaxUtil selection rate, i.e., $\Delta\mu_A(r_{\pi_A}^{-1}(\bar\beta)) = \Delta\mu_A(r_{\pi_A}^{-1}(\beta^{\text{MaxUtil}}))$ with $\bar\beta > \beta^{\text{MaxUtil}}$.
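The outcome curve and the rates of Definition 2.1 are straightforward to compute numerically for threshold policies. The sketch below traces $\beta \mapsto \Delta\mu(r_\pi^{-1}(\beta))$ on a grid and reads off $\beta^*$ and $\beta_0$; the distribution and $\Delta$ vector are illustrative toy values, and the helper names are our own.

```python
import numpy as np

def threshold_policy(pi, beta):
    """r_pi^{-1}(beta): the randomized-threshold policy selecting the top
    beta probability mass, for a distribution pi over scores 0..C-1."""
    tau = np.zeros_like(pi)
    remaining = beta
    for x in range(len(pi) - 1, -1, -1):   # sweep from the highest score down
        if pi[x] == 0.0:
            continue
        tau[x] = min(1.0, remaining / pi[x])
        remaining -= tau[x] * pi[x]
        if remaining <= 1e-12:
            break
    return tau

def outcome_curve(pi, delta, n_grid=501):
    """Evaluate beta -> Delta_mu(r_pi^{-1}(beta)) on a grid of selection rates."""
    betas = np.linspace(0.0, 1.0, n_grid)
    dmu = np.array([np.sum(pi * threshold_policy(pi, b) * delta) for b in betas])
    return betas, dmu

# Toy distribution and score-change vector (illustrative only).
pi_A = np.array([0.25, 0.25, 0.20, 0.15, 0.10, 0.05])
delta_A = np.array([-100.0, -60.0, -20.0, 20.0, 60.0, 100.0])

betas, dmu = outcome_curve(pi_A, delta_A)
beta_star = betas[np.argmax(dmu)]            # maximizer of the outcome curve
beta_0 = betas[dmu >= 0].max()               # harm threshold (largest non-harmful rate)
print(f"beta* = {beta_star:.3f}, beta_0 = {beta_0:.3f}")
```

By the concavity shown in Section 5.1, the curve has a single peak at $\beta^*$ and crosses zero at most once beyond it, so these grid read-offs approximate the quantities in Definition 2.1.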
2.2 Decision Rules and Fairness Criteria

We consider policies that maximize the institution's total expected utility, potentially subject to a constraint $\tau \in \mathcal{C} \subseteq [0, 1]^{2C}$ which enforces some notion of "fairness". Formally, the institution selects $\tau^* \in \arg\max \mathcal{U}(\tau)$ s.t. $\tau \in \mathcal{C}$. We consider the following three constraints:

Definition 2.2 (Fairness criteria). The maximum utility (MaxUtil) policy corresponds to the null constraint $\mathcal{C} = [0, 1]^{2C}$, so that the institution is free to focus solely on utility. The demographic parity (DemParity) policy results in equal selection rates between both groups; formally, the constraint is $\mathcal{C} = \{(\tau_A, \tau_B) : \sum_{x \in \mathcal{X}} \pi_A(x)\tau_A(x) = \sum_{x \in \mathcal{X}} \pi_B(x)\tau_B(x)\}$. The equal opportunity (EqOpt) policy results in equal true positive rates (TPR) between both groups, where the TPR is defined as

$\text{TPR}_j(\tau) := \frac{\sum_{x \in \mathcal{X}} \pi_j(x)\rho(x)\tau_j(x)}{\sum_{x \in \mathcal{X}} \pi_j(x)\rho(x)}$.

EqOpt ensures that the conditional probability of selection, given that the individual will be successful, is independent of the population, formally enforced by the constraint $\mathcal{C} = \{(\tau_A, \tau_B) : \text{TPR}_A(\tau_A) = \text{TPR}_B(\tau_B)\}$.

Just as the expected outcome $\Delta\mu$ can be expressed in terms of the selection rate for threshold policies, so can the total utility $\mathcal{U}$. In the unconstrained case, $\mathcal{U}$ varies independently over the selection rates for groups A and B; however, in the presence of fairness constraints the selection rate for one group determines the allowable selection rate for the other. The selection rates must be equal for DemParity, but for EqOpt we can define a transfer function, $G^{(A \to B)}$, which for every loan rate $\beta$ in group A gives the loan rate in group B that has the same true positive rate. Therefore, when considering threshold policies, decision rules amount to maximizing functions of single parameters. This idea is expressed in Figure 2, and underpins the results to follow.

3 Results

In order to clearly characterize the outcome of applying fairness constraints, we make the following assumption.

Assumption 1 (Institution utilities). The institution's individual utility function is more stringent than the expected score changes: $u(x) > 0 \implies \Delta(x) > 0$. (For the linear form presented in Example 2.1, $\frac{u_-}{u_+} < \frac{c_-}{c_+}$ is necessary and sufficient.)

This simplifying assumption quantifies the intuitive notion that institutions take a greater risk by accepting than the individual does by applying. For example, in the credit setting, a bank loses the amount loaned in the case of a default, but makes only interest in the case of a payback. Using Assumption 1, we can restrict the position of MaxUtil on the outcome curve in the following sense.

Proposition 3.1 (MaxUtil does not cause active harm). Under Assumption 1, $0 \le \Delta\mu^{\text{MaxUtil}} \le \Delta\mu^*$.

We direct the reader to Appendix C for the proof of the above proposition, and of all subsequent results presented in this section. The results are corollaries to theorems presented in Section 6.

3.1 Prospects and Pitfalls of Fairness Criteria

We begin by characterizing general settings under which fairness criteria act to improve outcomes over unconstrained MaxUtil strategies. For this result, we will assume that group A is disadvantaged
in the sense that the MaxUtil acceptance rate for B is large compared to the relevant acceptance rates for A.

[Figure 2: Both the outcomes $\Delta\mu$ and the institution utilities $\mathcal{U}$ can be plotted as a function of the selection rate for one group. The maxima of the utility curves (MU, DP, EO) determine the selection rates resulting from the various decision rules.]

Corollary 3.2 (Fairness criteria can cause relative improvement). (a) Under the assumption that $\beta_A^{\text{MaxUtil}} < \bar\beta$ and $\beta_B^{\text{MaxUtil}} > \beta_A^{\text{MaxUtil}}$, there exist population proportions $g_0 < g_1 < 1$ such that, for all $g_A \in [g_0, g_1]$, $\beta_A^{\text{MaxUtil}} < \beta_A^{\text{DemParity}} < \bar\beta$. That is, DemParity causes relative improvement.

(b) Under the assumption that there exist $\beta_A^{\text{MaxUtil}} < \beta < \beta' < \bar\beta$ such that $\beta_B^{\text{MaxUtil}} > G^{(A \to B)}(\beta), G^{(A \to B)}(\beta')$, there exist population proportions $g_2 < g_3 < 1$ such that, for all $g_A \in [g_2, g_3]$, $\beta_A^{\text{MaxUtil}} < \beta_A^{\text{EqOpt}} < \bar\beta$. That is, EqOpt causes relative improvement.

This result gives the conditions under which we can guarantee the existence of settings in which fairness criteria cause improvement relative to MaxUtil. Relying on machinery proved in Section 6, the result follows from comparing the position of optima on the utility curve to the outcome curve. Figure 2 displays an illustrative example of both the outcome curve and the institution's utility $\mathcal{U}$ as a function of the selection rates in group A. In the utility function (1), the contributions of each group are weighted by their population proportions $g_j$, and thus the resulting selection rates are sensitive to these proportions. As we see in the remainder of this section, fairness criteria can achieve nearly any position along the outcome curve under the right conditions. This fact comes from the potential mismatch between the outcomes, controlled by $\Delta$, and the institution's utility $u$.

The next theorem implies that DemParity can be bad for the long-term well-being of the protected group by being over-generous, under the mild assumption that $\Delta\mu_A(\beta_B^{\text{MaxUtil}}) < 0$:

Corollary 3.3 (DemParity can cause harm by being over-eager). Fix a selection rate $\beta$. Assume that $\beta_B^{\text{MaxUtil}} > \beta > \beta_A^{\text{MaxUtil}}$. Then, there exists a population proportion $g_0$ such that, for all $g_A \in [0, g_0]$, $\beta_A^{\text{DemParity}} > \beta$. In particular, when $\beta = \beta_0$, DemParity causes active harm, and when $\beta = \bar\beta$, DemParity causes relative harm.
The assumption $\Delta\mu_A(\beta_B^{\text{MaxUtil}}) < 0$ implies that a policy which selects individuals from group A at the selection rate that MaxUtil would have used for group B necessarily lowers the average score in A. This is one natural notion of protected group A's "disadvantage" relative to group B. In this case, DemParity penalizes the scores of group A even more than a naive MaxUtil policy, as long as the group proportion $g_A$ is small enough. Again, small $g_A$ is another notion of group disadvantage. Using credit scores as an example, Corollary 3.3 tells us that an overly aggressive fairness criterion will give too many loans to people in a protected group who cannot pay them back, hurting the group's credit scores on average. In the following theorem, we show that an analogous result holds for EqOpt.

Corollary 3.4 (EqOpt can cause harm by being over-eager). Suppose that $\beta_B^{\text{MaxUtil}} > G^{(A \to B)}(\beta)$ and $\beta > \beta_A^{\text{MaxUtil}}$. Then, there exists a population proportion $g_0$ such that, for all $g_A \in [0, g_0]$, $\beta_A^{\text{EqOpt}} > \beta$. In particular, when $\beta = \beta_0$, EqOpt causes active harm, and when $\beta = \bar\beta$, EqOpt causes relative harm.

We remark that in Corollary 3.4, we rely on the transfer function $G^{(A \to B)}$, which for every loan rate $\beta$ in group A gives the loan rate in group B that has the same true positive rate. Notice that if $G^{(A \to B)}$ were the identity function, Corollary 3.3 and Corollary 3.4 would be exactly the same. Indeed, our framework (detailed in Section 6 and Appendix B) unifies the analyses for a large class of fairness constraints that includes DemParity and EqOpt as special cases, and allows us to derive results about the impact on $\Delta\mu$ using general techniques. In the next section, we present further results that compare the fairness criteria, demonstrating the usefulness of our technical framework.

3.2 Comparing EqOpt and DemParity

Our analysis of the acceptance rates of EqOpt and DemParity in Section 6 suggests that it is difficult to compare DemParity and EqOpt without knowing the full distributions $\pi_A, \pi_B$, which are necessary to compute the transfer function $G^{(A \to B)}$. In fact, we have found that settings exist both in which DemParity causes harm while EqOpt causes improvement, and in which DemParity causes improvement while EqOpt causes harm. There cannot be one general rule as to which fairness criterion provides better outcomes in all settings. We now present simple sufficient conditions on the geometry of the distributions under which EqOpt is always better than DemParity in terms of $\Delta\mu_A$.

Corollary 3.5 (EqOpt may avoid active harm where DemParity fails). Fix a selection rate $\beta$. Suppose $\pi_A, \pi_B$ are identical up to a translation with $\mu_A < \mu_B$, i.e., $\pi_A(x) = \pi_B(x + (\mu_B - \mu_A))$. For simplicity, take $\rho(x)$ to be linear in $x$. Suppose

$\beta > \sum_{x > \mu_A} \pi_A(x)$.

Then there exists an interval $[g_1, g_2] \subseteq [0, 1]$ such that $\forall g_A > g_1$, $\beta^{\text{EqOpt}} < \beta$, while $\forall g_A < g_2$, $\beta^{\text{DemParity}} > \beta$. In particular, when $\beta = \beta_0$, this implies that DemParity causes active harm but EqOpt causes improvement for $g_A \in [g_1, g_2]$; moreover, for any $g_A$ such that DemParity causes improvement, EqOpt also causes improvement.

To interpret the conditions under which Corollary 3.5 holds, consider when we might have $\beta_0 > \sum_{x > \mu_A} \pi_A(x)$. This is precisely when $\Delta\mu_A\big(\sum_{x > \mu_A} \pi_A(x)\big) > 0$, that is, when $\Delta\mu_A > 0$ for a policy that selects every individual whose score is above the group A mean, which is reasonable in reality.
Indeed, the converse would imply that group A has such low scores that even selecting all above-average individuals in A would hurt the average score. In such a case, Corollary 3.5 suggests that EqOpt is better than DemParity at avoiding active harm, because it is more conservative. A natural question then is: can EqOpt cause relative harm by being too stingy?

Corollary 3.6 (DemParity never loans less than MaxUtil, but EqOpt might). Recall the definition of the TPR functions $\text{TPR}_j$, and suppose that the MaxUtil policy $\tau^{\text{MaxUtil}}$ is such that

$\beta_A^{\text{MaxUtil}} < \beta_B^{\text{MaxUtil}}$ and $\text{TPR}_A(\tau^{\text{MaxUtil}}) > \text{TPR}_B(\tau^{\text{MaxUtil}})$.   (4)

Then $\beta_A^{\text{EqOpt}} < \beta_A^{\text{MaxUtil}} < \beta_A^{\text{DemParity}}$. That is, EqOpt causes relative harm by selecting at a rate lower than MaxUtil.

The above theorem shows that DemParity is never stingier than MaxUtil to the protected group A, as long as A is disadvantaged in the sense that MaxUtil selects a larger proportion of B than of A. On the other hand, EqOpt can select less of group A than MaxUtil and, by definition, cause relative harm. This is a surprising result about EqOpt, and the phenomenon arises from high levels of in-group inequality in group A. Moreover, we show in Appendix C that there are parameter settings where the conditions in Corollary 3.6 are satisfied even under a stringent notion of disadvantage we call CDF domination, described therein.

4 Relaxations of Constrained Fairness

4.1 Regularized fairness

In many cases, it may be unrealistic for an institution to ensure that fairness constraints are met exactly. However, one can consider "soft" formulations of fairness constraints which penalize either the differences in acceptance rates (DemParity) or the differences in TPR (EqOpt). In Appendix B, we formulate these soft constraints as regularized objectives. For example, soft-DemParity can be rendered as

$\max_{\tau := (\tau_A, \tau_B)} \;\mathcal{U}(\tau) - \lambda\, \Phi\big(\langle \pi_A, \tau_A \rangle - \langle \pi_B, \tau_B \rangle\big)$,   (5)

where $\lambda > 0$ is a regularization parameter and $\Phi$ is a convex regularization function. We show that the solutions to these objectives are threshold policies, and can be fully characterized in terms of the group-wise selection rate. We also make rigorous the notion that policies which solve the soft-constraint objective interpolate between MaxUtil policies at $\lambda = 0$ and hard-constrained policies (DemParity or EqOpt) as $\lambda \to \infty$. This fact is clearly demonstrated by the form of the solutions in the special case of the regularization function $\Phi(t) = |t|$, provided in the appendix.

4.2 Fairness Under Measurement Error

Next, we consider the implications of an institution with imperfect knowledge of scores, under a simple model in which the estimate of an individual's score $X \sim \pi$ is prone to errors $e(X)$, so that the estimated score is distributed as $\hat{X} := X + e(X) \sim \hat\pi$. Constraining the error to be negative results in the setting in which scores are systematically underestimated. In this setting, it is equivalent to require that the CDF of the underestimated distribution $\hat\pi$ is dominated by the CDF of the true distribution $\pi$, that is,
$\sum_{x \ge c} \hat\pi(x) \le \sum_{x \ge c} \pi(x)$ for all $c \in [C]$.

Then we can compare the institution's behavior under this estimation to its behavior under the truth.

Proposition 4.1 (Underestimation causes underselection). Fix the distribution of B as $\pi_B$, and let $\beta$ be the acceptance rate of A when the institution makes decisions using perfect knowledge of the distribution $\pi_A$. Denote by $\hat\beta$ the acceptance rate when the group is instead taken as $\hat\pi_A$. Then $\beta_A^{\text{MaxUtil}} > \hat\beta_A^{\text{MaxUtil}}$ and $\beta_A^{\text{DemParity}} > \hat\beta_A^{\text{DemParity}}$. If the errors are further such that the true TPR dominates the estimated TPR, it is also true that $\beta_A^{\text{EqOpt}} > \hat\beta_A^{\text{EqOpt}}$.

Because fairness criteria encourage a higher selection rate for disadvantaged groups (Corollary 3.2), systematic underestimation widens the regime of their applicability. Furthermore, since the estimated MaxUtil policy underloans, the region for relative improvement in the outcome curve (Figure 1) is larger, corresponding to more regimes under which fairness criteria can yield favorable outcomes. Thus the potential for measurement error should be a factor when motivating these criteria.

4.3 Outcome-based alternative

As explained in the preceding sections, fairness criteria may actively harm disadvantaged groups. It is thus natural to consider a modified decision rule which involves the explicit maximization of $\Delta\mu_A$. In this case, imagine that the institution's primary goal is to aid the disadvantaged group, subject to a limited profit loss compared to the maximum possible expected profit $\mathcal{U}^{\text{MaxUtil}}$. The corresponding problem is as follows:

$\max_{\tau_A} \;\Delta\mu_A(\tau_A)$ s.t. $\mathcal{U}_A^{\text{MaxUtil}} - \mathcal{U}(\tau) < \delta$.   (6)

Unlike the fairness-constrained objective, this objective no longer depends on group B, and instead depends on our model of the mean score change in group A, $\Delta\mu_A$.

Proposition 4.2 (Outcome-based solution). In the above setting, the optimal bank policy $\tau_A$ is a threshold policy with selection rate $\beta = \min\{\beta^*, \beta^{\max}\}$, where $\beta^*$ is the outcome-optimal loan rate and $\beta^{\max}$ is the maximum loan rate under the bank's "budget".

The above formulation's advantage over fairness constraints is that it directly optimizes the outcome of A, and can be approximately implemented given a reasonable ability to predict outcomes. Importantly, this objective shifts the focus to outcome modeling, highlighting the importance of domain-specific knowledge. Future work can consider strategies that are robust to outcome model errors.
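A numerical sketch of the rule in Equation (6), restricted to threshold policies (which Proposition 4.2 shows is without loss): sweep the selection rate, keep rates within the profit budget, and maximize $\Delta\mu_A$. It reuses the `threshold_policy` helper from the sketch in Section 2.1; the budget value and function names are our own illustrations.

```python
import numpy as np

def outcome_based_rate(pi_A, delta_A, u_A, g_A, budget, n_grid=501):
    """Approximate solution to Equation (6) over threshold policies:
    maximize Delta_mu_A subject to losing at most `budget` utility
    relative to the unconstrained optimum on group A."""
    betas = np.linspace(0.0, 1.0, n_grid)
    dmu = np.array([np.sum(pi_A * threshold_policy(pi_A, b) * delta_A) for b in betas])
    util = np.array([g_A * np.sum(pi_A * threshold_policy(pi_A, b) * u_A) for b in betas])
    feasible = util >= util.max() - budget   # the bank's profit "budget"
    # Proposition 4.2: the optimum is min(beta*, beta_max); this grid
    # search recovers the same rate numerically.
    return betas[feasible][np.argmax(dmu[feasible])]
```

By concavity of both curves in the selection rate, the feasible set is an interval, so the grid maximizer lands either at the outcome-optimal rate $\beta^*$ or at the budget boundary $\beta^{\max}$, matching Proposition 4.2.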
5 Optimality of Threshold Policies

Next, we move toward statements of the main theorems underlying the results presented in Section 3. We begin by establishing notation which we shall use throughout. We use $\circ$ to denote the Hadamard (entrywise) product between vectors, and we identify functions mapping $\mathcal{X} \to \mathbb{R}$ with vectors in $\mathbb{R}^C$. We also define the group-wise utilities

$\mathcal{U}_j(\tau_j) := \sum_{x \in \mathcal{X}} \pi_j(x)\, \tau_j(x)\, u(x)$,   (7)

so that for $\tau = (\tau_A, \tau_B)$, $\mathcal{U}(\tau) := g_A\, \mathcal{U}_A(\tau_A) + g_B\, \mathcal{U}_B(\tau_B)$.

First, we formally describe threshold policies, and rigorously justify why we may always assume without loss of generality that the institution adopts policies of this form.

Definition 5.1 (Threshold selection policy). A single-group selection policy $\tau \in [0, 1]^C$ is called a threshold policy if it has the form of a randomized threshold on score:

$\tau_{c,\gamma}(x) = \begin{cases} 1, & x > c \\ \gamma, & x = c \\ 0, & x < c \end{cases}$ for some $c \in [C]$ and $\gamma \in (0, 1]$.   (8)

As a technicality, if no members of a population have a given score $x \in \mathcal{X}$, there may be multiple threshold policies which yield equivalent selection rates for a given population. To avoid redundancy, we introduce the notation $\tau_j \cong_{\pi_j} \tau_j'$ to mean that the set of scores on which $\tau_j$ and $\tau_j'$ differ has probability 0 under $\pi_j$; formally, $\sum_{x : \tau_j(x) \ne \tau_j'(x)} \pi_j(x) = 0$. For any distribution $\pi_j$, $\cong_{\pi_j}$ is an equivalence relation. Moreover, we see that if $\tau_j \cong_{\pi_j} \tau_j'$, then $\tau_j$ and $\tau_j'$ both provide the same utility for the institution, induce the same outcomes for individuals in group $j$, and have the same selection and true positive rates. Hence, if $(\tau_A, \tau_B)$ is an optimal solution to any of MaxUtil, EqOpt, or DemParity, so is any $(\tau_A', \tau_B')$ for which $\tau_A \cong_{\pi_A} \tau_A'$ and $\tau_B \cong_{\pi_B} \tau_B'$.

For threshold policies in particular, their equivalence class under $\cong_{\pi_j}$ is uniquely determined by the selection rate function

$r_{\pi_j}(\tau_j) := \sum_{x \in \mathcal{X}} \pi_j(x)\, \tau_j(x)$,   (9)

which denotes the fraction of group $j$ which is selected. Indeed, we have the following lemma (proved in Appendix A.1):

Lemma 5.1. Let $\tau_j$ and $\tau_j'$ be threshold policies. Then $\tau_j \cong_{\pi_j} \tau_j'$ if and only if $r_{\pi_j}(\tau_j) = r_{\pi_j}(\tau_j')$. Further, $r_{\pi_j}$ is a bijection from $T_{\text{thresh}}(\pi_j)$ to $[0, 1]$, where $T_{\text{thresh}}(\pi_j)$ is the set of equivalence classes of threshold policies under $\cong_{\pi_j}$. Finally, $\pi_j \circ r_{\pi_j}^{-1}(\beta)$ is well defined.

Remark that $r_{\pi_j}^{-1}(\beta)$ is an equivalence class rather than a single policy. However, $\pi_j \circ r_{\pi_j}^{-1}(\beta)$ is well defined, meaning that $\pi_j \circ \tau_j = \pi_j \circ \tau_j'$ for any two policies in the same equivalence class. Since all quantities of interest will only depend on policies $\tau_j$ through $\pi_j \circ \tau_j$, it does not matter which representative of $r_{\pi_j}^{-1}(\beta)$ we pick. Hence, abusing notation slightly, we shall represent $T_{\text{thresh}}(\pi_j)$ by choosing one representative from each equivalence class under $\cong_{\pi_j}$.[3]

It turns out that the policies which arise in this way are always optimal in the sense that, for a given loan rate $\beta_j$, the threshold policy $r_{\pi_j}^{-1}(\beta_j)$ is the (essentially unique) policy which maximizes both the institution's utility and the utility of the group. With the group-wise utility $\mathcal{U}_j$ as defined in Equation (7), we have the following result:

[3] One way to do this is to consider the set of all threshold policies $\tau_{c,\gamma}$ such that $\gamma = 1$ if $\pi_j(c) = 0$, and $\pi_j(c-1) > 0$ if $\gamma = 1$ and $c > 1$.
Proposition 5.1 (Threshold policies are preferable). Suppose that $u(x)$ and $\Delta(x)$ are strictly increasing in $x$. Given any loaning policy $\tau_j$ for a population with distribution $\pi_j$, the policy $\tau_j^{\text{thresh}} := r_{\pi_j}^{-1}(r_{\pi_j}(\tau_j)) \in T_{\text{thresh}}(\pi_j)$ satisfies

$\Delta\mu_j(\tau_j^{\text{thresh}}) \ge \Delta\mu_j(\tau_j)$ and $\mathcal{U}_j(\tau_j^{\text{thresh}}) \ge \mathcal{U}_j(\tau_j)$.   (11)

Moreover, both inequalities hold with equality if and only if $\tau_j \cong_{\pi_j} \tau_j^{\text{thresh}}$.

The map $\tau_j \mapsto r_{\pi_j}^{-1}(r_{\pi_j}(\tau_j))$ can be thought of as transforming an arbitrary policy $\tau_j$ into a threshold policy with the same selection rate. In this language, the above proposition states that this map never reduces institution utility or individual outcomes. We can also show that optimal MaxUtil and DemParity policies are threshold policies, as are all EqOpt policies under an additional assumption:

Proposition 5.2 (Existence of optimal threshold policies under fairness constraints). Suppose that $u(x)$ is strictly increasing in $x$. Then all optimal MaxUtil policies $(\tau_A, \tau_B)$ satisfy $\tau_j \cong_{\pi_j} r_{\pi_j}^{-1}(r_{\pi_j}(\tau_j))$ for $j \in \{A, B\}$. The same holds for all optimal DemParity policies, and, if in addition $u(x)/\rho(x)$ is increasing, the same is true for all optimal EqOpt policies.

To prove Proposition 5.1, we invoke the following general lemma, which is proved using standard convex-analysis arguments (in Appendix A.2):

Lemma 5.2. Let $v \in \mathbb{R}^C$ and $w \in \mathbb{R}^C_{\ge 0}$, and suppose either that $v(x)$ is increasing in $x$ and $v(x)/w(x)$ is increasing, or that $w(x) = 0$ for all $x \in \mathcal{X}$. Let $\pi \in \text{Simplex}^{C-1}$ and fix $t \in [0, \sum_{x \in \mathcal{X}} \pi(x) w(x)]$. Then any

$\tau^* \in \arg\max_{\tau \in [0,1]^C} \langle v \circ \pi, \tau \rangle$ s.t. $\langle \pi \circ w, \tau \rangle = t$   (12)

satisfies $\tau^* \cong_\pi r_\pi^{-1}(r_\pi(\tau^*))$. Moreover, at least one maximizer $\tau^* \in T_{\text{thresh}}(\pi)$ exists.

Proof of Proposition 5.1. We first prove Proposition 5.1 for the function $\mathcal{U}_j$. Given our nominal policy $\tau_j$, let $\beta_j = r_{\pi_j}(\tau_j)$. We now apply Lemma 5.2 with $v(x) = u(x)$ and $w(x) = 1$. For this choice of $v$ and $w$, $\langle v \circ \pi_j, \tau \rangle = \mathcal{U}_j(\tau)$ and $\langle \pi_j \circ w, \tau \rangle = r_{\pi_j}(\tau)$. Then, if $\tau_j' \in \arg\max_\tau \mathcal{U}_j(\tau)$ s.t. $r_{\pi_j}(\tau) = \beta_j$, Lemma 5.2 implies that $\tau_j' \cong_{\pi_j} r_{\pi_j}^{-1}(r_{\pi_j}(\tau_j'))$.

On the other hand, assume that $\tau_j \cong_{\pi_j} r_{\pi_j}^{-1}(r_{\pi_j}(\tau_j))$. We show that $r_{\pi_j}^{-1}(r_{\pi_j}(\tau_j))$ is a maximizer; this will imply that $\tau_j$ is a maximizer, since $\tau_j \cong_{\pi_j} r_{\pi_j}^{-1}(r_{\pi_j}(\tau_j))$ implies that $\mathcal{U}_j(\tau_j) = \mathcal{U}_j(r_{\pi_j}^{-1}(r_{\pi_j}(\tau_j)))$. By Lemma 5.2 there exists a maximizer $\tau_j^* \in T_{\text{thresh}}(\pi_j)$, which means that $\tau_j^* = r_{\pi_j}^{-1}(r_{\pi_j}(\tau_j^*))$. Since $\tau_j^*$ is feasible, we must have $r_{\pi_j}(\tau_j^*) = r_{\pi_j}(\tau_j)$, and thus $\tau_j^* = r_{\pi_j}^{-1}(r_{\pi_j}(\tau_j))$, as needed. The same argument follows verbatim if we instead choose $v(x) = \Delta(x)$ and compute $\langle v \circ \pi_j, \tau \rangle = \Delta\mu_j(\tau)$.

We now argue Proposition 5.2 for MaxUtil, as it is a straightforward application of Lemma 5.2. We prove Proposition 5.2 for DemParity and EqOpt separately in Sections 6.1 and 6.2.

Proof of Proposition 5.2 for MaxUtil. The MaxUtil case follows from Lemma 5.2 with $v(x) = u(x)$, $t = 0$, and $w = 0$.
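The inverse rate function $r_\pi^{-1}$ is easy to realize explicitly. The following sketch recovers the $(c, \gamma)$ parameters of Definition 5.1 from a target selection rate; indices play the role of scores, and edge-case handling is simplified relative to the representative convention of Footnote 3.

```python
import numpy as np

def rate_to_threshold(pi, beta):
    """Recover (c, gamma) of Definition 5.1 so that tau_{c,gamma} has
    selection rate beta under pi (Lemma 5.1: unique up to pi-null sets).
    Scores are the indices 0..C-1."""
    assert 0.0 < beta <= 1.0
    tail = 0.0                                 # mass strictly above the candidate c
    for c in range(len(pi) - 1, -1, -1):
        if pi[c] == 0.0:
            continue                           # skip zero-mass scores
        if tail + pi[c] >= beta:
            return c, (beta - tail) / pi[c]    # randomize exactly at score c
        tail += pi[c]
    return 0, 1.0                              # numerical slack: select everyone

# Example: the rate-0.3 threshold policy for a toy distribution.
pi = np.array([0.2, 0.3, 0.3, 0.2])
c, gamma = rate_to_threshold(pi, 0.3)
print(c, round(gamma, 3))   # -> 2 0.333: all of score 3, a third of score 2
```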
5.1 Quantiles and Concavity of the Outcome Curve

To further our analysis, we now introduce left and right quantile functions, allowing us to specify thresholds in terms of both selection rates and score cutoffs.

Definition 5.2 (Upper quantile function). Define $Q_j$ to be the upper quantile function corresponding to $\pi_j$, i.e.,

$Q_j(\beta) = \operatorname{argmax}\Big\{c : \sum_{x=c}^C \pi_j(x) > \beta\Big\}$ and $Q_j^+(\beta) := \operatorname{argmax}\Big\{c : \sum_{x=c}^C \pi_j(x) \ge \beta\Big\}$.   (13)

Crucially, $Q(\beta)$ is continuous from the right, and $Q^+(\beta)$ is continuous from the left. Further, $Q(\cdot)$ and $Q^+(\cdot)$ allow us to compute derivatives of key functions, like the mapping from a selection rate $\beta$ to the group outcome associated with a policy of that rate, $\Delta\mu(r_\pi^{-1}(\beta))$. Because we take $\pi$ to have discrete support, all functions in this work are piecewise linear, so we shall need to distinguish between the left and right derivatives, defined as follows:

$\partial_- f(x) := \lim_{t \to 0^-} \frac{f(x + t) - f(x)}{t}$ and $\partial_+ f(y) := \lim_{t \to 0^+} \frac{f(y + t) - f(y)}{t}$.   (14)

For $f$ supported on $[a, b]$, we say that $f$ is left- (resp. right-) differentiable if $\partial_- f(x)$ exists for all $x \in (a, b]$ (resp. $\partial_+ f(y)$ exists for all $y \in [a, b)$). We now state the fundamental derivative computation which underpins the results to follow:

Lemma 5.3. Let $e_x$ denote the vector such that $e_x(x) = 1$ and $e_x(x') = 0$ for $x' \ne x$. Then $\pi_j \circ r_{\pi_j}^{-1}(\beta) : [0, 1] \to [0, 1]^C$ is continuous, and has left and right derivatives

$\partial_+\, \pi_j \circ r_{\pi_j}^{-1}(\beta) = e_{Q(\beta)}$ and $\partial_-\, \pi_j \circ r_{\pi_j}^{-1}(\beta) = e_{Q^+(\beta)}$.   (15)

The above lemma is proved in Appendix A.3. Moreover, Lemma 5.3 implies that the outcome curve is concave under the assumption that $\Delta(x)$ is monotone:

Proposition 5.3. Let $\pi$ be a distribution over $C$ states. Then $\beta \mapsto \Delta\mu(r_\pi^{-1}(\beta))$ is concave. In fact, if $w(x)$ is any non-decreasing map from $\mathcal{X} \to \mathbb{R}$, then $\beta \mapsto \langle w, \pi \circ r_\pi^{-1}(\beta) \rangle$ is concave.

Proof. Recall that a univariate function $f$ is concave (and finite) on $[a, b]$ if and only if (a) $f$ is left- and right-differentiable, (b) for all $x \in (a, b)$, $\partial_- f(x) \ge \partial_+ f(x)$, and (c) for any $x > y$, $\partial_- f(x) \le \partial_+ f(y)$. Observe that $\Delta\mu(r_\pi^{-1}(\beta)) = \langle \Delta, \pi \circ r_\pi^{-1}(\beta) \rangle$. By Lemma 5.3, $\pi \circ r_\pi^{-1}(\beta)$ has right and left derivatives $e_{Q(\beta)}$ and $e_{Q^+(\beta)}$. Hence, we have that

$\partial_+ \Delta\mu(r_\pi^{-1}(\beta)) = \Delta(Q(\beta))$ and $\partial_- \Delta\mu(r_\pi^{-1}(\beta)) = \Delta(Q^+(\beta))$.   (16)

Using the fact that $\Delta(x)$ is monotone and that $Q \le Q^+$, we see that $\partial_+ \Delta\mu(r_\pi^{-1}(\beta)) \le \partial_- \Delta\mu(r_\pi^{-1}(\beta))$, and that both one-sided derivatives are non-increasing in $\beta$, from which it follows that $\Delta\mu(r_\pi^{-1}(\beta))$ is concave. The general concavity result holds by replacing $\Delta(x)$ with $w(x)$.
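A direct transcription of Definition 5.2, as a sketch over score indices 0..C-1; the convention for rates outside the attainable range is ours.

```python
import numpy as np

def upper_quantiles(pi, beta):
    """Q(beta) and Q^+(beta) from Definition 5.2: the largest score index c
    whose tail mass sum_{x>=c} pi[x] exceeds (resp. at least equals) beta."""
    tail = np.cumsum(pi[::-1])[::-1]           # tail[c] = sum_{x >= c} pi[x]
    above = np.nonzero(tail > beta)[0]
    at_least = np.nonzero(tail >= beta)[0]
    Q = above.max() if above.size else 0       # convention when beta >= 1
    Q_plus = at_least.max() if at_least.size else 0
    return Q, Q_plus

pi = np.array([0.2, 0.3, 0.3, 0.2])
print(upper_quantiles(pi, 0.2))  # -> (2, 3): Q and Q^+ differ exactly at a corner rate
```

The disagreement between `Q` and `Q_plus` at corner selection rates is precisely what produces the distinct one-sided derivatives in Equation (15).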
[Figure 3 (utility contour plot): Considering the utility as a function of the two groups' selection rates, fairness constraints correspond to restricting the optimization to one-dimensional curves. The DemParity (DP) constraint is a straight line with slope 1, while the EqOpt (EO) constraint is a curve given by the graph of $G^{(A\to B)}$. The derivatives considered throughout Section 6 are taken with respect to the selection rate $\beta_A$ (horizontal axis); projecting the EO and DP constraint curves onto the horizontal axis recovers concave utility curves such as those shown in the lower panel of Figure 2 (where MaxUtil is represented by a horizontal line through the MU optimal solution).]

6 Proofs of Main Theorems

We are now ready to present and prove theorems that characterize the selection rates under the fairness constraints DemParity and EqOpt. These characterizations are crucial for proving the results in Section 3. Our computations also generalize readily to other linear constraints, in a way that will become clear in Section 6.2.

6.1 A Characterization Theorem for DemParity

In this section, we provide a theorem that gives an explicit characterization of the range of selection rates $\beta_A$ for A when the bank loans according to DemParity. Observe that the DemParity objective corresponds to solving the following linear program:

$\max_{\tau = (\tau_A, \tau_B) \in [0,1]^{2C}} \mathcal{U}(\tau)$ s.t. $\langle \pi_A, \tau_A \rangle = \langle \pi_B, \tau_B \rangle$.

Let us introduce the auxiliary variable $\beta := \langle \pi_A, \tau_A \rangle = \langle \pi_B, \tau_B \rangle$ corresponding to the selection rate which is held constant across groups, so that all feasible solutions lie on the DP line in Figure 3. We can then express the following equivalent linear program:

$\max_{\tau = (\tau_A, \tau_B) \in [0,1]^{2C},\; \beta \in [0,1]} \mathcal{U}(\tau)$ s.t. $\beta = \langle \pi_j, \tau_j \rangle$, $j \in \{A, B\}$.

This is equivalent because, for a given $\beta$, Proposition 5.2 says that the utility-maximizing policies are of the form $\tau_j = r_{\pi_j}^{-1}(\beta)$. We now prove this:
Proof of Proposition 5.2 for DemParity. Noting that $r_{\pi_j}(\tau_j) = \langle \pi_j, \tau_j \rangle$, we see that, by Lemma 5.2 under the special case where $v(x) = u(x)$ and $w(x) = 1$, the optimal solution $(\tau_A^*(\beta), \tau_B^*(\beta))$ for fixed $r_{\pi_A}(\tau_A) = r_{\pi_B}(\tau_B) = \beta$ can be chosen to coincide with threshold policies. Optimizing over $\beta$, the global optimum must coincide with thresholds.

Hence, any optimal policy is equivalent to the threshold policy $\tau = (r_{\pi_A}^{-1}(\beta), r_{\pi_B}^{-1}(\beta))$, where $\beta$ solves the following optimization:

$\max_{\beta \in [0,1]} \mathcal{U}\big(r_{\pi_A}^{-1}(\beta), r_{\pi_B}^{-1}(\beta)\big)$.   (17)

We shall show that the above expression is in fact a concave function of $\beta$, and hence the set of optimal selection rates can be characterized by first-order conditions. This is presented formally in the following theorem:

Theorem 6.1 (Selection rates for DemParity). The set of optimal selection rates $\beta^*$ satisfying (17) forms a continuous interval $[\beta^-_{\text{DemParity}}, \beta^+_{\text{DemParity}}]$, such that for any $\beta \in [0, 1]$, we have

$\beta < \beta^-_{\text{DemParity}}$ if $g_A\, u(Q_A(\beta)) + g_B\, u(Q_B(\beta)) > 0$,
$\beta > \beta^+_{\text{DemParity}}$ if $g_A\, u(Q_A^+(\beta)) + g_B\, u(Q_B^+(\beta)) < 0$.

Proof. Note that we can write

$\mathcal{U}\big(r_{\pi_A}^{-1}(\beta), r_{\pi_B}^{-1}(\beta)\big) = g_A \langle u, \pi_A \circ r_{\pi_A}^{-1}(\beta) \rangle + g_B \langle u, \pi_B \circ r_{\pi_B}^{-1}(\beta) \rangle$.

Since $u(x)$ is non-decreasing in $x$, Proposition 5.3 implies that $\beta \mapsto \mathcal{U}(r_{\pi_A}^{-1}(\beta), r_{\pi_B}^{-1}(\beta))$ is concave in $\beta$. Hence, all optimal selection rates $\beta^*$ lie in an interval $[\beta^-, \beta^+]$. To further characterize this interval, let us compute left- and right-derivatives:

$\partial_+ \mathcal{U}\big(r_{\pi_A}^{-1}(\beta), r_{\pi_B}^{-1}(\beta)\big) = g_A \langle u, \partial_+\, \pi_A \circ r_{\pi_A}^{-1}(\beta) \rangle + g_B \langle u, \partial_+\, \pi_B \circ r_{\pi_B}^{-1}(\beta) \rangle$
$= g_A \langle u, e_{Q_A(\beta)} \rangle + g_B \langle u, e_{Q_B(\beta)} \rangle$   (Lemma 5.3)
$= g_A\, u(Q_A(\beta)) + g_B\, u(Q_B(\beta))$.

The same argument shows that

$\partial_- \mathcal{U}\big(r_{\pi_A}^{-1}(\beta), r_{\pi_B}^{-1}(\beta)\big) = g_A\, u(Q_A^+(\beta)) + g_B\, u(Q_B^+(\beta))$.

By concavity of $\mathcal{U}(r_{\pi_A}^{-1}(\beta), r_{\pi_B}^{-1}(\beta))$, a positive right derivative at $\beta$ implies that $\beta < \beta^*$ for all $\beta^*$ satisfying (17), and similarly, a negative left derivative at $\beta$ implies that $\beta > \beta^*$ for all $\beta^*$ satisfying (17).

With a result of the above form, we can now easily prove statements such as that in Corollary 3.3 (see Appendix C for proofs), by fixing a selection rate of interest (e.g., $\beta_0$) and inverting the
inequalities in Theorem 6.1 to find the exact population proportions under which, for example, DemParity results in a higher selection rate than $\beta_0$.

6.2 EqOpt and General Constraints

Next, we provide a theorem that gives an explicit characterization of the range of selection rates $\beta_A$ for A when the bank loans according to EqOpt. Observe that the EqOpt objective corresponds to solving the following linear program:

$\max_{\tau = (\tau_A, \tau_B) \in [0,1]^{2C}} \mathcal{U}(\tau)$ s.t. $\langle w_A \circ \pi_A, \tau_A \rangle = \langle w_B \circ \pi_B, \tau_B \rangle$,   (18)

where $w_j = \frac{\rho}{\langle \rho, \pi_j \rangle}$. This problem is similar to the demographic parity optimization in (17), except that the constraint includes the weights. Whereas we parameterized demographic parity solutions in terms of the acceptance rate $\beta$ in Equation (17), we will parameterize Equation (18) in terms of the true positive rate (TPR), $t := \langle w_A \circ \pi_A, \tau_A \rangle$. Thus, (18) becomes

$\max_{t \in [0, t_{\max}]} \;\max_{(\tau_A, \tau_B) \in [0,1]^{2C}} \sum_{j \in \{A,B\}} g_j\, \mathcal{U}_j(\tau_j)$ s.t. $\langle w_j \circ \pi_j, \tau_j \rangle = t$, $j \in \{A, B\}$,   (19)

where $t_{\max} = \min_{j \in \{A,B\}} \langle \pi_j, w_j \rangle$ is the largest possible TPR. The EO curve in Figure 3 illustrates that feasible solutions to this optimization problem lie on a curve parametrized by $t$. Note that the objective function decouples over $j \in \{A, B\}$ in the inner optimization problem,

$\max_{\tau_j \in [0,1]^C} g_j\, \mathcal{U}_j(\tau_j)$ s.t. $\langle w_j \circ \pi_j, \tau_j \rangle = t$.   (20)

We will now show that all optimal solutions for this inner optimization problem are $\pi_j$-a.e. equal to a policy in $T_{\text{thresh}}(\pi_j)$, and thus can be written as $r_{\pi_j}^{-1}(\beta_j)$, depending only on the resulting selection rate.

Proof of Proposition 5.2 for EqOpt. We apply Lemma 5.2 to the inner optimization in (20) with $v(x) = u(x)$ and $w(x) = \frac{\rho(x)}{\langle \rho, \pi_j \rangle}$. The claim follows from the assumption that $u(x)/\rho(x)$ is increasing, by optimizing over $t$.

The selection rate $\beta_j$ is uniquely determined by the TPR $t$ (the proof appears in Appendix B.1):

Lemma 6.1. Suppose that $w(x) > 0$ for all $x$. Then the function

$T_{j, w_j}(\beta) := \langle r_{\pi_j}^{-1}(\beta), \pi_j \circ w_j \rangle$

is a bijection from $[0, 1]$ to $[0, \langle \pi_j, w_j \rangle]$.

Hence, for any $t \in [0, t_{\max}]$, the mapping from TPR to acceptance rate, $T_{j,w_j}^{-1}(t)$, is well defined, and any solution to (20) is $\pi_j$-a.e. equal to the policy $r_{\pi_j}^{-1}(T_{j,w_j}^{-1}(t))$. Thus (19) reduces to

$\max_{t \in [0, t_{\max}]} \sum_{j \in \{A,B\}} g_j\, \mathcal{U}_j\big(r_{\pi_j}^{-1}(T_{j,w_j}^{-1}(t))\big)$.   (21)
The above expression parametrizes the optimization problem in terms of a single variable. We shall show that it is in fact a concave function of $t$, and hence the set of optimal selection rates can be characterized by first-order conditions. This is presented formally in the following theorem:

Theorem 6.2 (Selection rates for EqOpt). The set of optimal selection rates $\beta^*$ for group A satisfying (19) forms a continuous interval $[\beta^-_{\text{EqOpt}}, \beta^+_{\text{EqOpt}}]$, such that for any $\beta \in [0, 1]$, we have

$\beta < \beta^-_{\text{EqOpt}}$ if $g_A \frac{u(Q_A(\beta))}{w_A(Q_A(\beta))} + g_B \frac{u\big(Q_B(G_w^{(A\to B)}(\beta))\big)}{w_B\big(Q_B(G_w^{(A\to B)}(\beta))\big)} > 0$,

$\beta > \beta^+_{\text{EqOpt}}$ if $g_A \frac{u(Q_A^+(\beta))}{w_A(Q_A^+(\beta))} + g_B \frac{u\big(Q_B^+(G_w^{(A\to B)}(\beta))\big)}{w_B\big(Q_B^+(G_w^{(A\to B)}(\beta))\big)} < 0$.

Here, $G_w^{(A\to B)}(\beta) := T_{B,w_B}^{-1}(T_{A,w_A}(\beta))$ denotes the (well-defined) map from a selection rate $\beta_A$ for A to the selection rate $\beta_B$ for B such that the policies $\tau_A^* := r_{\pi_A}^{-1}(\beta_A)$ and $\tau_B^* := r_{\pi_B}^{-1}(\beta_B)$ satisfy the constraint in (18).

Proof. Starting with the equivalent problem in (21), we use the concavity result of Lemma B.1. Because the objective function is a positively weighted sum of two concave functions, it is also concave. Hence, all optimal true positive rates $t^*$ lie in an interval $[t^-, t^+]$. To further characterize $[t^-, t^+]$, we compute left- and right-derivatives, again using the result of Lemma B.1:

$\partial_+ \sum_{j \in \{A,B\}} g_j\, \mathcal{U}_j\big(r_{\pi_j}^{-1}(T_{j,w_j}^{-1}(t))\big) = g_A\, \partial_+ \mathcal{U}_A\big(r_{\pi_A}^{-1}(T_{A,w_A}^{-1}(t))\big) + g_B\, \partial_+ \mathcal{U}_B\big(r_{\pi_B}^{-1}(T_{B,w_B}^{-1}(t))\big)$
$= g_A \frac{u\big(Q_A(T_{A,w_A}^{-1}(t))\big)}{w_A\big(Q_A(T_{A,w_A}^{-1}(t))\big)} + g_B \frac{u\big(Q_B(T_{B,w_B}^{-1}(t))\big)}{w_B\big(Q_B(T_{B,w_B}^{-1}(t))\big)}$.

The same argument shows that

$\partial_- \sum_{j \in \{A,B\}} g_j\, \mathcal{U}_j\big(r_{\pi_j}^{-1}(T_{j,w_j}^{-1}(t))\big) = g_A \frac{u\big(Q_A^+(T_{A,w_A}^{-1}(t))\big)}{w_A\big(Q_A^+(T_{A,w_A}^{-1}(t))\big)} + g_B \frac{u\big(Q_B^+(T_{B,w_B}^{-1}(t))\big)}{w_B\big(Q_B^+(T_{B,w_B}^{-1}(t))\big)}$.

By concavity, a positive right derivative at $t$ implies that $t < t^*$ for all $t^*$ satisfying (21), and similarly, a negative left derivative at $t$ implies that $t > t^*$ for all $t^*$ satisfying (21). Finally, by Lemma 6.1, this interval in $t$ uniquely characterizes an interval of acceptance rates. We thus translate directly into a statement about the selection rates $\beta$ for group A by noting that $T_{A,w_A}^{-1}(t) = \beta$ and $T_{B,w_B}^{-1}(t) = G_w^{(A\to B)}(\beta)$.

Lastly, we remark that the results derived in this section go through verbatim for any linear constraint of the form $\langle w, \pi_A \circ \tau_A \rangle = \langle w, \pi_B \circ \tau_B \rangle$, as long as $u(x)/w(x)$ is increasing in $x$ and $w(x) > 0$.
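Operationally, Theorems 6.1 and 6.2 reduce both criteria to one-dimensional concave maximizations. The sketch below solves them by grid search, interpolating between the corner rates of threshold policies, where $\mathcal{U}$ and the TPR are linear in the selection rate (Proposition 5.3), so the interpolation is exact. The helper names and grid resolutions are our own choices.

```python
import numpy as np

def value_at(pi, vals, beta):
    """<vals, pi o r_pi^{-1}(beta)>: piecewise-linear in beta between the
    corner rates of threshold policies, so interpolation over corners is exact."""
    b = np.concatenate([[0.0], np.cumsum(pi[::-1])])            # corner rates
    v = np.concatenate([[0.0], np.cumsum((pi * vals)[::-1])])   # corner values
    return np.interp(beta, b, v)

def demparity_rate(pi_A, pi_B, u, g_A, grid=np.linspace(0, 1, 2001)):
    # Theorem 6.1: maximize the concave total utility over the common rate.
    total = g_A * value_at(pi_A, u, grid) + (1 - g_A) * value_at(pi_B, u, grid)
    return grid[np.argmax(total)]

def transfer_A_to_B(pi_A, pi_B, rho, beta_A, grid=np.linspace(0, 1, 2001)):
    # G^{(A->B)}: the rate in B whose TPR matches A's TPR at beta_A,
    # obtained by inverting the monotone TPR map via interpolation.
    tpr_A = value_at(pi_A, rho, beta_A) / np.dot(pi_A, rho)
    tpr_B = value_at(pi_B, rho, grid) / np.dot(pi_B, rho)
    return np.interp(tpr_A, tpr_B, grid)

def eqopt_rate(pi_A, pi_B, rho, u, g_A, grid=np.linspace(0, 1, 2001)):
    # Theorem 6.2: maximize total utility along the EqOpt constraint curve.
    beta_B = transfer_A_to_B(pi_A, pi_B, rho, grid)
    total = g_A * value_at(pi_A, u, grid) + (1 - g_A) * value_at(pi_B, u, beta_B)
    return grid[np.argmax(total)]
```

Because both objectives are concave in the scanned rate, the grid maximizer approximates any point of the optimal intervals $[\beta^-, \beta^+]$ characterized by the theorems.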
[Figure 4: The empirical payback rates as a function of credit score, and the score CDFs, for both groups from the TransUnion TransRisk dataset.]

7 Simulations

We examine the outcomes induced by fairness constraints in the context of FICO scores for two race groups. FICO scores are a proprietary classifier widely used in the United States to predict credit worthiness. Our FICO data is based on a sample of 301,536 TransUnion TransRisk scores from 2003 [US Federal Reserve, 2007], preprocessed by Hardt et al. [2016]. These scores, corresponding to $x$ in our model, range from 300 to 850 and are meant to predict credit risk. Empirical data labeled by race allows us to estimate the distributions $\pi_j$, where $j$ represents race, which is restricted to two values: white non-Hispanic (labeled "white" in figures) and black. Using national demographic data, we set the population proportions to be 18% and 82%.

Individuals were labeled as defaulted if they failed to pay a debt for at least 90 days on at least one account in the ensuing 18-24 month period; we use this data to estimate the success probability given score, $\rho_j(x)$, which we allow to vary by group to match the empirical data (see Figure 4). Our outcome curve framework allows for this relaxation; however, the discrepancy can also be attributed to group-dependent mismeasurement of scores, and adjusting the scores accordingly would allow for a single $\rho(x)$.

We use the success probabilities to define the affine utility and score change functions from Example 2.1. We model individual penalties as a score drop of $c_- = -150$ in the case of a default, and an increase of $c_+ = 75$ in the case of successful repayment.

In Figure 5, we display the empirical CDFs along with the selection rates resulting from different loaning strategies for two different settings of bank utilities. In the case that the bank experiences a loss/profit ratio of $\frac{u_-}{u_+} = -10$, no fairness criterion surpasses the active-harm rate $\beta_0$; however, in the case of $\frac{u_-}{u_+} = -4$, DemParity overloans, in line with the statement in Corollary 3.3.

These results are further examined in Figure 6, which displays the normalized outcome curves and the utility curves for both the white and the black group. To plot the MaxUtil utility curves, the group that is not on display has its selection rate fixed at $\beta^{\text{MaxUtil}}$. In this figure, the top panel corresponds to the average change in credit scores for each group under different loaning rates $\beta$; the bottom panel shows the corresponding total utility $\mathcal{U}$ (summed over both groups and weighted by group population sizes) for the bank.
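The simulation pipeline can be sketched end to end. Since the TransUnion data is proprietary, the sketch below substitutes synthetic stand-ins for the empirical $\pi_j$ and $\rho$, but keeps the parameters stated above ($g_A = 0.18$, $c_+ = 75$, $c_- = -150$, $u_-/u_+ \in \{-4, -10\}$); it reuses `demparity_rate` and `value_at` from the Section 6 sketch, and its outputs are illustrative, not the paper's numbers.

```python
import numpy as np

scores = np.arange(300, 851)            # x ranges over 300..850 as in the data
g_A = 0.18                              # population proportion of the protected group

def discretized_normal(mean, sd):
    w = np.exp(-0.5 * ((scores - mean) / sd) ** 2)
    return w / w.sum()

# Synthetic stand-ins for the empirical score distributions and repay curve.
pi_A = discretized_normal(520.0, 90.0)  # disadvantaged group (lower scores)
pi_B = discretized_normal(640.0, 90.0)
rho = 1.0 / (1.0 + np.exp(-(scores - 600.0) / 60.0))

delta = 75.0 * rho - 150.0 * (1.0 - rho)          # c_+ = 75, c_- = -150

for loss in (-4.0, -10.0):                        # u_-/u_+ = -4 and -10
    u = 1.0 * rho + loss * (1.0 - rho)
    beta_MU_A = pi_A[u > 0].sum()                 # MaxUtil lends where u(x) > 0
    beta_DP = demparity_rate(pi_A, pi_B, u, g_A)
    dmu_MU = value_at(pi_A, delta, beta_MU_A)
    dmu_DP = value_at(pi_A, delta, beta_DP)
    print(f"u-/u+ = {loss}: beta_MU_A = {beta_MU_A:.3f}, beta_DP = {beta_DP:.3f}, "
          f"dmu_A(MU) = {dmu_MU:+.1f}, dmu_A(DP) = {dmu_DP:+.1f}")
```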
[Figure 5 here: "Loaning Decisions". Empirical CDFs (fraction of group above each score) for the black and white groups, with the decision thresholds for MaxUtil (MU), DemParity (DP), and EqOpt (EO) and the active-harm threshold marked, at profit/loss ratios 1/4 (left) and 1/10 (right).]

Figure 5: The empirical CDFs of both groups are plotted along with the decision thresholds resulting from MaxUtil, DemParity, and EqOpt for a model with bank utilities set to (a) $\frac{u_-}{u_+} = -4$ and (b) $\frac{u_-}{u_+} = -10$. The threshold for active harm is displayed; in (a) DemParity causes active harm, while in (b) it does not. EqOpt and MaxUtil never cause active harm.

Figure 6 highlights that the position of the utility optima in the lower panels determines the loan (selection) rates. In this specific instance, the utility and change ratios are fairly close, $\frac{u_-}{u_+} = -4$ and $\frac{c_-}{c_+} = -2$, meaning that the bank's profit motivations align with individual outcomes to some extent. Here, we can see that EqOpt loans much closer to the optimal rate than DemParity, similar to the setting suggested by Corollary 3.2.

Although one might hope for decisions made under fairness constraints to positively affect the black group, we observe the opposite behavior. The MaxUtil policy (solid orange line) and the EqOpt policy result in similar expected credit score changes for the black group. However, DemParity (dashed green line) causes a negative expected credit score change in the black group, corresponding to active harm. For the white group, the bank utility curve has almost the same shape under the fairness criteria as it does under MaxUtil, the main difference being that the fairness criteria lower the total expected profit from this group.

This behavior stems from a discrepancy between the outcome and profit curves for each population. While incentives for the bank and positive results for individuals are somewhat aligned for the majority group, under fairness constraints they are more heavily misaligned for the minority group, as seen in the left panels of Figure 6. We remark that in other settings where unconstrained profit maximization is misaligned with individual outcomes (e.g., when $\frac{u_-}{u_+} = -10$), fairness criteria may perform more favorably for the minority group by pulling the utility curve into a shape consistent with the outcome curve.

By analyzing the effects of MaxUtil, DemParity, and EqOpt on actual credit score lending data, we demonstrate the applicability of our model to real-world settings. In particular, some results shown in Section 3 hold empirically for the FICO TransUnion TransRisk scores.
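Continuing the sketch above (and reusing `scores`, `rho`, `group_pi`, `selected_avg`, `delta`, `pi_A`, `betas`, `outcome_A`, and `beta_0` from it), the following lines illustrate how the MaxUtil and DemParity rates arise and how they compare against $\beta_0$. The loss/profit ratio matches panel (a) of Figure 5; the group distributions remain hypothetical, so only the qualitative comparison is meaningful.

```python
import numpy as np

# Continuing the previous sketch: hypothetical two-group comparison of the
# MaxUtil and DemParity selection rates against the active-harm threshold.
u_plus, u_minus = 1.0, -4.0                   # panel (a): u_-/u_+ = -4
util = u_plus * rho + u_minus * (1.0 - rho)   # expected bank utility per score

pi_B = group_pi(650)                          # hypothetical advantaged group
g_A, g_B = 0.18, 0.82                         # population proportions from the text

profit_A = np.array([selected_avg(pi_A, util, b) for b in betas])
profit_B = np.array([selected_avg(pi_B, util, b) for b in betas])

# MaxUtil: each group's rate maximizes that group's profit separately.
beta_MU_A = betas[int(np.argmax(profit_A))]

# DemParity: one shared rate maximizing total population-weighted profit.
beta_DP = betas[int(np.argmax(g_A * profit_A + g_B * profit_B))]

print(f"MaxUtil rate for A: {beta_MU_A:.2f}, DemParity rate: {beta_DP:.2f}, "
      f"beta_0: {beta_0:.2f}")
print("DemParity overloans" if beta_DP > beta_0
      else "DemParity stays below the active-harm rate")
```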
[Figure 6 here: "Outcome Curves" (top row: expected score change vs. selection rate, for the black and white groups) and "Utility Curves" (bottom row: bank profit vs. selection rate), with the MaxUtil (MU), DemParity (DP), and EqOpt (EO) solutions marked.]

Figure 6: The outcome and utility curves are plotted for both groups against the group selection rates. The relative positions of the utility maxima determine the position of the decision rule thresholds. We hold $\frac{u_-}{u_+} = -4$ fixed.
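For completeness, a few matplotlib lines render curves in the layout of Figure 6 from the quantities computed in the sketches above; since the inputs there are hypothetical, the shapes will not match the empirical curves.

```python
import matplotlib.pyplot as plt

# Render outcome (top) and bank-utility (bottom) curves for the hypothetical
# group A, reusing betas, outcome_A, and profit_A from the sketches above.
fig, (ax_top, ax_bot) = plt.subplots(2, 1, sharex=True, figsize=(5, 6))
ax_top.plot(betas, outcome_A)
ax_top.axhline(0.0, color="gray", linewidth=0.5)  # active harm lies below this line
ax_top.set_ylabel("score change")
ax_bot.plot(betas, profit_A)
ax_bot.set_xlabel("selection rate")
ax_bot.set_ylabel("profit")
plt.tight_layout()
plt.show()
```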
8 Conclusion and Future Work

We argue that without a careful model of delayed outcomes, we cannot foresee the impact a fairness criterion would have if enforced as a constraint on a classification system. However, if such an accurate outcome model is available, we show that there are more direct ways to optimize for positive outcomes than via existing fairness criteria. Our formal framework exposes a concise, yet expressive way to model outcomes via the expected change in a variable of interest caused by an institutional decision. This leads to the natural concept of an outcome curve that allows us to interpret and compare solutions effectively.

In essence, the formalism we propose requires us to understand the two-variable causal mechanism that translates decisions to outcomes. Depending on the application, such an understanding might necessitate greater domain knowledge and additional research into the specifics of the application. This is consistent with much scholarship that points to the context-sensitive nature of fairness in machine learning.

An interesting direction for future work is to consider other characteristics of impact beyond the change in population mean. Variance and individual-level outcomes are natural and important considerations. Moreover, it would be interesting to understand the robustness of outcome optimization to modeling and measurement errors.

Acknowledgements

We thank Lily Hu, Aaron Roth, and Cathy O'Neil for discussions and feedback on an earlier version of the manuscript. We thank the students of CS294: Fairness in Machine Learning (Fall 2017, University of California, Berkeley) for inspiring class discussions and comments on a presentation that was a precursor of this work. This material is based upon work supported by the National Science Foundation Graduate Research Fellowship under Grant No. DGE 1752814.
References

Solon Barocas and Andrew D. Selbst. Big data's disparate impact. California Law Review, 104, 2016.

Toon Calders, Faisal Kamiran, and Mykola Pechenizkiy. Building classifiers with independency constraints. In Proc. IEEE ICDMW, ICDMW '09, pages 13–18, 2009.

Alexandra Chouldechova. Fair prediction with disparate impact: A study of bias in recidivism prediction instruments. FATML, 2016.

Danielle Ensign, Sorelle A. Friedler, Scott Neville, Carlos Scheidegger, and Suresh Venkatasubramanian. Runaway feedback loops in predictive policing. arXiv preprint arXiv:1706.09847, 2017.

Executive Office of the President. Big data: A report on algorithmic systems, opportunity, and civil rights. Technical report, White House, May 2016.

Dean P. Foster and Rakesh V. Vohra. An economic argument for affirmative action. Rationality and Society, 4(2):176–188, 1992.

Andreas Fuster, Paul Goldsmith-Pinkham, Tarun Ramadorai, and Ansgar Walther. Predictably unequal? The effects of machine learning on credit markets. SSRN, 2017.

Moritz Hardt, Eric Price, and Nati Srebro. Equality of opportunity in supervised learning. In Proc. 30th NIPS, 2016.

Lily Hu and Yiling Chen. A short-term intervention for long-term fairness in the labor market. In Proc. 27th WWW, 2018.

Matthew Joseph, Michael Kearns, Jamie H. Morgenstern, and Aaron Roth. Fairness in learning: Classic and contextual bandits. In Proc. 30th NIPS, pages 325–333, 2016.

Alexandra Kalev, Frank Dobbin, and Erin Kelly. Best practices or best guesses? Assessing the efficacy of corporate affirmative action and diversity policies. American Sociological Review, 71(4):589–617, 2006.

Stephen N. Keith, Robert M. Bell, August G. Swanson, and Albert P. Williams. Effects of affirmative action in medical schools. New England Journal of Medicine, 313(24):1519–1525, 1985.

Niki Kilbertus, Mateo Rojas-Carulla, Giambattista Parascandolo, Moritz Hardt, Dominik Janzing, and Bernhard Schölkopf. Avoiding discrimination through causal reasoning. In Proc. 30th NIPS, pages 656–666, 2017.

Jon M. Kleinberg, Sendhil Mullainathan, and Manish Raghavan. Inherent trade-offs in the fair determination of risk scores. Proc. 8th ITCS, 2017.

Matt J. Kusner, Joshua R. Loftus, Chris Russell, and Ricardo Silva. Counterfactual fairness. In Proc. 30th NIPS, pages 4069–4079, 2017.

Razieh Nabi and Ilya Shpitser. Fair inference on outcomes. arXiv:1705.10378v1, 2017.
Geoff Pleiss, Manish Raghavan, Felix Wu, Jon Kleinberg, and Kilian Q. Weinberger. On fairness and calibration. In Advances in Neural Information Processing Systems 30, pages 5684–5693, 2017.

Stephen Ross and John Yinger. The Color of Credit: Mortgage Discrimination, Research Methodology, and Fair-Lending Enforcement. MIT Press, Cambridge, 2006.

US Federal Reserve. Report to the congress on credit scoring and its effects on the availability and affordability of credit, 2007.

Muhammad Bilal Zafar, Isabel Valera, Manuel Gomez Rodriguez, and Krishna P. Gummadi. Fairness constraints: Mechanisms for fair classification. In Proc. 20th AISTATS, pages 962–970. PMLR, 2017.

A Optimality of Threshold Policies

A.1 Proof of Lemma 5.1

We begin with the first statement of the lemma. Suppose $\tau_j \cong_{\pi_j} \tilde\tau_j$. Then there exists a set $S \subset \mathcal{X}$ such that $\pi_j(x) = 0$ for all $x \in S$, and for all $x \notin S$, $\tau_j(x) = \tilde\tau_j(x)$. Thus,

$$r_{\pi_j}(\tau_j) - r_{\pi_j}(\tilde\tau_j) = \sum_{x \in \mathcal{X}} \pi_j(x)\big(\tau_j(x) - \tilde\tau_j(x)\big) = \sum_{x \in S} \pi_j(x)\big(\tau_j(x) - \tilde\tau_j(x)\big) = 0\,.$$

Conversely, suppose that $r_{\pi_j}(\tau_j) = r_{\pi_j}(\tilde\tau_j)$. Let $\tau_j = \tau_{c,\gamma}$ and $\tilde\tau_j = \tau_{\tilde c,\tilde\gamma}$ as in Definition 5.1. We now have the following cases:

1. Case 1: $c = \tilde c$. Then $\tau_j(x) = \tilde\tau_j(x)$ for all $x \in \mathcal{X} - \{c\}$. Hence,
$$0 = r_{\pi_j}(\tau_j) - r_{\pi_j}(\tilde\tau_j) = \pi_j(c)\big(\tau_j(c) - \tilde\tau_j(c)\big)\,.$$
This implies that either $\tau_j(c) = \tilde\tau_j(c)$, and thus $\tau_j(x) = \tilde\tau_j(x)$ for all $x \in \mathcal{X}$, or otherwise $\pi_j(c) = 0$, in which case we still have $\tau_j \cong_{\pi_j} \tilde\tau_j$ (since the two policies agree everywhere outside the set $\{c\}$).

2. Case 2: $c \ne \tilde c$. We assume without loss of generality that $c < \tilde c \le C$. Since the policies $\tau_{\tilde c,1}$ and $\tau_{\tilde c+1,0}$ are identical for $\tilde c < C$, we may also assume without loss of generality that $\tilde\gamma \in [0,1)$. Thus for all $x \in S := \{\tilde c, \tilde c+1, \dots, C\}$, we have $\tilde\tau_j(x) < \tau_j(x)$. This implies that
$$0 = r_{\pi_j}(\tau_j) - r_{\pi_j}(\tilde\tau_j) = \sum_{x \in \mathcal{X}} \pi_j(x)\big(\tau_j(x) - \tilde\tau_j(x)\big) \ge \min_{x \in S}\big(\tau_j(x) - \tilde\tau_j(x)\big) \cdot \sum_{x \in S} \pi_j(x)\,.$$
Since $\min_{x \in S}\big(\tau_j(x) - \tilde\tau_j(x)\big) > 0$, it follows that $\sum_{x \in S} \pi_j(x) = 0$, whence $\tau_j \cong_{\pi_j} \tilde\tau_j$.

Next, we show that $r_{\pi_j}$ is a bijection from $\mathcal{T}_{\mathrm{thresh}}(\pi_j)$ to $[0,1]$. That $r_{\pi_j}$ is injective follows immediately from the fact that if $r_{\pi_j}(\tau_j) = r_{\pi_j}(\tilde\tau_j)$, then $\tau_j \cong_{\pi_j} \tilde\tau_j$. To show that it is surjective, we exhibit for every $\beta \in [0,1]$ a threshold policy $\tau_{c,\gamma}$ for which $r_{\pi_j}(\tau_{c,\gamma}) = \beta$. We may assume $\beta < 1$, since the all-ones policy has a selection rate of 1. Recall the definition of the inverse CDF

$$Q_j(\beta) := \arg\max\Big\{c : \sum_{x=c}^{C} \pi_j(x) > \beta\Big\}\,.$$

Since $\beta < 1$, $Q_j(\beta) \le C$. Let $\beta_+ = \sum_{x=Q_j(\beta)}^{C} \pi_j(x)$, and let $\beta_- = \sum_{x=Q_j(\beta)+1}^{C} \pi_j(x)$. Note that by definition, we have $\beta_- \le \beta < \beta_+$, and $\beta_+ - \beta_- = \pi_j(Q_j(\beta))$. Hence, if we define $\gamma = \frac{\beta - \beta_-}{\beta_+ - \beta_-}$, we have

$$r_{\pi_j}(\tau_{Q_j(\beta),\gamma}) = \pi_j(Q_j(\beta))\,\gamma + \sum_{x=Q_j(\beta)+1}^{C} \pi_j(x) = \beta_- + (\beta_+ - \beta_-)\gamma = \beta_- + \beta - \beta_- = \beta\,.$$

A.2 Proof of Lemma 5.2

Given $\tau \in [0,1]^C$, we define the normal cone at $\tau$ as $\mathrm{NC}(\tau) := \mathrm{ConicalHull}\{z : \tau + z \in [0,1]^C\}$. We can describe $\mathrm{NC}(\tau)$ explicitly as

$$\mathrm{NC}(\tau) := \{z \in \mathbb{R}^C : z_i \ge 0 \text{ if } \tau_i = 0,\; z_i \le 0 \text{ if } \tau_i = 1\}\,.$$

Immediately from the above definition, we have the following useful identity: for any vector $g \in \mathbb{R}^C$,

$$\langle g, z \rangle \le 0 \;\; \forall z \in \mathrm{NC}(\tau) \quad \text{if and only if} \quad \forall x \in \mathcal{X}, \;\; \begin{cases} \tau(x) = 0 & g(x) < 0 \\ \tau(x) = 1 & g(x) > 0 \\ \tau(x) \in [0,1] & g(x) = 0 \end{cases}\,. \tag{22}$$

Now consider the optimization problem (12). By the first-order KKT conditions, we know that for any optimizer $\tau^*$ of the above objective, there exists some $\lambda \in \mathbb{R}$ such that, for all $z \in \mathrm{NC}(\tau^*)$,

$$\langle z,\; v \circ \pi + \lambda\, \pi \circ w \rangle \le 0\,.$$

By (22), we must have that

$$\tau^*(x) \;\begin{cases} = 0 & \pi(x)(v(x) + \lambda w(x)) < 0 \\ = 1 & \pi(x)(v(x) + \lambda w(x)) > 0 \\ \in [0,1] & \pi(x)(v(x) + \lambda w(x)) = 0 \end{cases}\,.$$

Now $\tau^*$ is not necessarily a threshold policy. To conclude the theorem, it suffices to exhibit a threshold policy $\tilde\tau^*$ such that $\tilde\tau^* \cong_\pi \tau^*$. (Note that $\tilde\tau^*$ will also be feasible for the constraint, and have the same objective value; hence $\tilde\tau^*$ will be optimal as well.)
Given $\tau^*$ and $\lambda$, let $c^* = \min\{c \in \mathcal{X} : v(c) + \lambda w(c) \ge 0\}$. If either (a) $w(x) = 0$ for all $x \in \mathcal{X}$ and $v(x)$ is strictly increasing, or (b) $v(x)/w(x)$ is strictly increasing, then the modified policy

$$\tilde\tau^*(x) = \begin{cases} 0 & x < c^* \\ \tau^*(x) & x = c^* \\ 1 & x > c^* \end{cases}$$

is a threshold policy, and $\tilde\tau^* \cong_\pi \tau^*$. Moreover, $\langle w \circ \pi, \tilde\tau^* \rangle = \langle w \circ \pi, \tau^* \rangle$ and $\langle \pi, \tilde\tau^* \rangle = \langle \pi, \tau^* \rangle$, which implies that $\tilde\tau^*$ is an optimal policy for the objective in Lemma 5.2.

A.3 Proof of Lemma 5.3

We shall prove that

$$\partial_+\, \pi_j \circ r_{\pi_j}^{-1}(\beta) = e_{Q_j(\beta)}\,, \tag{23}$$

where the derivative is with respect to $\beta$. The computation of the left-derivative is analogous. Since we are concerned with right-derivatives, we shall take $\beta \in [0,1)$. Since $\pi_j \circ r_{\pi_j}^{-1}(\beta)$ does not depend on the choice of representative for $r_{\pi_j}^{-1}$, we can choose a canonical representation for $r_{\pi_j}^{-1}$. In Section A.1, we saw that the threshold policy $\tau_{Q_j(\beta),\gamma(\beta)}$ has acceptance rate $\beta$, where we had defined

$$\beta_+ = \sum_{x=Q_j(\beta)}^{C} \pi_j(x) \quad \text{and} \quad \beta_- = \sum_{x=Q_j(\beta)+1}^{C} \pi_j(x)\,, \tag{24}$$

$$\gamma(\beta) = \frac{\beta - \beta_-}{\beta_+ - \beta_-}\,. \tag{25}$$

Note then that for each $x$, $\tau_{Q_j(\beta),\gamma(\beta)}(x)$ is piecewise linear in $\beta$, and thus admits left and right derivatives. We first claim that

$$\forall x \in \mathcal{X} \setminus \{Q_j(\beta)\}, \quad \partial_+\, \tau_{Q_j(\beta),\gamma(\beta)}(x) = 0\,. \tag{26}$$

To see this, note that $Q_j(\beta)$ is right continuous, so for all $\epsilon > 0$ sufficiently small, $Q_j(\beta + \epsilon) = Q_j(\beta)$. Hence, for all $\epsilon$ sufficiently small and all $x \ne Q_j(\beta)$, we have $\tau_{Q_j(\beta+\epsilon),\gamma(\beta+\epsilon)}(x) = \tau_{Q_j(\beta),\gamma(\beta)}(x)$, as needed. Thus, Equation (26) implies that $\partial_+\, \pi_j \circ r_{\pi_j}^{-1}(\beta)$ is supported on $x = Q_j(\beta)$, and hence

$$\partial_+\, \pi_j \circ r_{\pi_j}^{-1}(\beta) = \partial_+ \big(\pi_j(x)\,\tau_{Q_j(\beta),\gamma(\beta)}(x)\big)\Big|_{x=Q_j(\beta)} \cdot e_{Q_j(\beta)}\,.$$

To conclude, we must show that $\partial_+ \big(\pi_j(x)\,\tau_{Q_j(\beta),\gamma(\beta)}(x)\big)\big|_{x=Q_j(\beta)} = 1$. To show this, we have

$$1 = \partial_+ (\beta) = \partial_+ \big(r_{\pi_j}(\tau_{Q_j(\beta),\gamma(\beta)})\big) \qquad \text{since } r_{\pi_j}(\tau_{Q_j(\beta),\gamma(\beta)}) = \beta \;\; \forall \beta \in [0,1)$$
$$= \partial_+ \sum_{x \in \mathcal{X}} \pi_j(x)\, \tau_{Q_j(\beta),\gamma(\beta)}(x)$$
$$= \partial_+ \big(\pi_j(x)\, \tau_{Q_j(\beta),\gamma(\beta)}(x)\big)\Big|_{x=Q_j(\beta)}\,,$$

as needed.

B Characterization of Fairness Solutions

B.1 Derivative Computation for EqOpt

In this section, we prove Lemma 6.1, which we recall below.

Lemma 6.1. Suppose that $w(x) > 0$ for all $x$. Then the function

$$T_{j,w_j}(\beta) := \big\langle r_{\pi_j}^{-1}(\beta),\; \pi_j \circ w_j \big\rangle$$

is a bijection from $[0,1]$ to $[0, \langle \pi_j, w_j \rangle]$.

We will prove Lemma 6.1 in tandem with the following derivative computation, which we applied in the proof of Theorem 6.2.

Lemma B.1. The function

$$U_j(t; w_j) := U_j\big(r_{\pi_j}^{-1}(T_{j,w_j}^{-1}(t))\big)$$

is concave in $t$ and has left and right derivatives

$$\partial_+\, U_j(t; w_j) = \frac{u(Q_j(T_{j,w_j}^{-1}(t)))}{w_j(Q_j(T_{j,w_j}^{-1}(t)))} \quad \text{and} \quad \partial_-\, U_j(t; w_j) = \frac{u(Q_j^+(T_{j,w_j}^{-1}(t)))}{w_j(Q_j^+(T_{j,w_j}^{-1}(t)))}\,.$$

Proof of Lemmas 6.1 and B.1. Consider a $\beta \in [0,1]$. Then $\pi_j \circ r_{\pi_j}^{-1}(\beta)$ is continuous and left and right differentiable by Lemma 5.3, and its right and left derivatives are the indicator vectors $e_{Q_j(\beta)}$ and $e_{Q_j^+(\beta)}$, respectively. Consequently, $\beta \mapsto \langle w_j, \pi_j \circ r_{\pi_j}^{-1}(\beta) \rangle$ has right and left derivatives $w_j(Q_j(\beta))$ and $w_j(Q_j^+(\beta))$, respectively; both are strictly positive by the assumption $w(x) > 0$. Hence, $T_{j,w_j}(\beta) = \langle w_j, \pi_j \circ r_{\pi_j}^{-1}(\beta) \rangle$ is strictly increasing in $\beta$, and so the map is injective. It is also surjective because $\beta = 0$ induces the policy $\tau_j = 0$ and $\beta = 1$ induces the policy $\tau_j = 1$ (up to $\pi_j$-measure zero). Hence, $T_{j,w_j}$ is an order-preserving bijection with left- and right-derivatives, and we can compute the left and right derivatives of its inverse as follows:

$$\partial_+\, T_{j,w_j}^{-1}(t) = \frac{1}{\partial_+ T_{j,w_j}(\beta)\big|_{\beta = T_{j,w_j}^{-1}(t)}} = \frac{1}{w_j(Q_j(T_{j,w_j}^{-1}(t)))}\,,$$

and similarly, $\partial_-\, T_{j,w_j}^{-1}(t) = \frac{1}{w_j(Q_j^+(T_{j,w_j}^{-1}(t)))}$. Then we can compute

$$\partial_+\, U_j\big(r_{\pi_j}^{-1}(T_{j,w_j}^{-1}(t))\big) = \partial_+\, U_j\big(r_{\pi_j}^{-1}(\beta)\big)\Big|_{\beta = T_{j,w_j}^{-1}(t)} \cdot \partial_+\, T_{j,w_j}^{-1}(t) = \frac{u(Q_j(T_{j,w_j}^{-1}(t)))}{w_j(Q_j(T_{j,w_j}^{-1}(t)))}\,,$$

and similarly, $\partial_-\, U_j\big(r_{\pi_j}^{-1}(T_{j,w_j}^{-1}(t))\big) = \frac{u(Q_j^+(T_{j,w_j}^{-1}(t)))}{w_j(Q_j^+(T_{j,w_j}^{-1}(t)))}$. One can verify that for all $t_1 < t_2$, one has
$$\partial_+\, U_j\big(r_{\pi_j}^{-1}(T_{j,w_j}^{-1}(t_1))\big) \ge \partial_-\, U_j\big(r_{\pi_j}^{-1}(T_{j,w_j}^{-1}(t_2))\big)\,,$$

and that for all $t$, $\partial_+\, U_j\big(r_{\pi_j}^{-1}(T_{j,w_j}^{-1}(t))\big) \le \partial_-\, U_j\big(r_{\pi_j}^{-1}(T_{j,w_j}^{-1}(t))\big)$. These facts establish that the mapping $t \mapsto U_j\big(r_{\pi_j}^{-1}(T_{j,w_j}^{-1}(t))\big)$ is concave.

B.2 Characterizations Under Soft Constraints

Given a convex penalty $\Phi : \mathbb{R} \to \mathbb{R}_{\ge 0}$ and $\lambda \in \mathbb{R}_{\ge 0}$, one can write down the general form of the soft-constrained utility optimization problem

$$\max_{\tau = (\tau_A, \tau_B)} \; \mathcal{U}(\tau) - \lambda\, \Phi\big( \langle w_A \circ \pi_A, \tau_A \rangle - \langle w_B \circ \pi_B, \tau_B \rangle \big)\,, \tag{27}$$

where $w_A$ and $w_B$ represent generic constraints. Again, we shall assume that for $j \in \{A,B\}$, $u(x)/w_j(x)$ is non-decreasing. Recall that for $w_j = (1,1,\dots,1)$, one recovers the soft version of DemParity, whereas for $w_j = \rho / \langle \rho, \pi_j \rangle$, one recovers the soft-constrained version of EqOpt. The same argument presented in Section 6.2 shows that the optimal policies are of the form

$$\tau_j = r_{\pi_j}^{-1}\big(T_{j,w_j}^{-1}(t_j)\big)\,,$$

where $(t_A, t_B)$ are solutions to the following optimization problem:

$$\max_{t_A \in [0, \langle \pi_A, w_A \rangle],\; t_B \in [0, \langle \pi_B, w_B \rangle]} \; g_A\, U_A\big(r_{\pi_A}^{-1}(T_{A,w_A}^{-1}(t_A))\big) + g_B\, U_B\big(r_{\pi_B}^{-1}(T_{B,w_B}^{-1}(t_B))\big) - \lambda\, \Phi(t_A - t_B)\,.$$

The following lemma gives us a first-order characterization of these optimal TPRs $(t_A, t_B)$.

Lemma B.2. All optimal policies are equivalent to threshold policies with selection rates $(\beta_A, \beta_B)$ which satisfy

$$\begin{pmatrix} 0 \\ 0 \end{pmatrix} \in \begin{pmatrix} \Big[\, g_A \frac{u(Q_A(\beta_A))}{w_A(Q_A(\beta_A))} - \lambda\, \partial_+\Phi(\Delta),\;\; g_A \frac{u(Q_A^+(\beta_A))}{w_A(Q_A^+(\beta_A))} - \lambda\, \partial_-\Phi(\Delta) \,\Big] \\[6pt] \Big[\, g_B \frac{u(Q_B(\beta_B))}{w_B(Q_B(\beta_B))} + \lambda\, \partial_-\Phi(\Delta),\;\; g_B \frac{u(Q_B^+(\beta_B))}{w_B(Q_B^+(\beta_B))} + \lambda\, \partial_+\Phi(\Delta) \,\Big] \end{pmatrix}\,, \tag{28}$$

where $\Delta = t_A - t_B = T_{A,w_A}(\beta_A) - T_{B,w_B}(\beta_B)$.

Proof. Let $\partial(\cdot)$ denote the super-gradient set of a concave function. Note that if $F$ is left- and right-differentiable and concave, then $\partial F(x) = [\partial_+ F(x), \partial_- F(x)]$. By concavity of $U_j$ and convexity of $\Phi$, we must have that

$$\begin{pmatrix} 0 \\ 0 \end{pmatrix} \in \partial\Big\{ \sum_{j \in \{A,B\}} g_j\, U_j\big(r_{\pi_j}^{-1}(T_{j,w_j}^{-1}(t_j))\big) - \lambda\, \Phi(t_A - t_B) \Big\} = \begin{pmatrix} g_A\, \partial U_A\big(r_{\pi_A}^{-1}(T_{A,w_A}^{-1}(t_A))\big) + \partial_{t_A}\{-\lambda\Phi(t_A - t_B)\} \\[4pt] g_B\, \partial U_B\big(r_{\pi_B}^{-1}(T_{B,w_B}^{-1}(t_B))\big) + \partial_{t_B}\{-\lambda\Phi(t_A - t_B)\} \end{pmatrix}$$

$$= \begin{pmatrix} g_A\, \partial U_A\big(r_{\pi_A}^{-1}(T_{A,w_A}^{-1}(t_A))\big) - \lambda\, \partial\Phi(t)\big|_{t = t_A - t_B} \\[4pt] g_B\, \partial U_B\big(r_{\pi_B}^{-1}(T_{B,w_B}^{-1}(t_B))\big) + \lambda\, \partial\Phi(t)\big|_{t = t_A - t_B} \end{pmatrix}$$

$$= \begin{pmatrix} \Big[\, g_A\, \partial_+ U_A\big(r_{\pi_A}^{-1}(T_{A,w_A}^{-1}(t_A))\big) - \lambda\, \partial_+\Phi(t)\big|_{t = t_A - t_B},\;\; g_A\, \partial_- U_A\big(r_{\pi_A}^{-1}(T_{A,w_A}^{-1}(t_A))\big) - \lambda\, \partial_-\Phi(t)\big|_{t = t_A - t_B} \,\Big] \\[6pt] \Big[\, g_B\, \partial_+ U_B\big(r_{\pi_B}^{-1}(T_{B,w_B}^{-1}(t_B))\big) + \lambda\, \partial_-\Phi(t)\big|_{t = t_A - t_B},\;\; g_B\, \partial_- U_B\big(r_{\pi_B}^{-1}(T_{B,w_B}^{-1}(t_B))\big) + \lambda\, \partial_+\Phi(t)\big|_{t = t_A - t_B} \,\Big] \end{pmatrix}$$
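As a closing illustration, the following self-contained sketch solves a small instance of the soft-constrained program (27) by brute force, taking the quadratic penalty $\Phi(s) = s^2$ and parametrizing directly by selection rates (which is equivalent, since the maps $T_{j,w_j}$ are bijections). Every numerical input is again a hypothetical stand-in chosen only to make the objective concrete.

```python
import numpy as np

# A self-contained sketch of the soft-constrained program (27) with the
# quadratic penalty Phi(s) = s^2, solved by brute force over selection rates
# (equivalent to optimizing over (t_A, t_B) since the maps T_{j,w_j} are
# bijections). All numerical inputs are hypothetical stand-ins.
scores = np.arange(300, 851, 25)
rho = 1.0 / (1.0 + np.exp(-(scores - 600) / 60.0))   # hypothetical P(repay | x)

def group_pi(center):
    v = np.exp(-((scores - center) / 110.0) ** 2)
    return v / v.sum()

def policy(pi, beta):
    """Threshold policy with selection rate beta (fill from the top score down)."""
    tau, rem = np.zeros_like(pi), beta
    for i in range(len(pi) - 1, -1, -1):
        tau[i] = min(1.0, rem / pi[i]) if pi[i] > 0 else 0.0
        rem = max(0.0, rem - pi[i])
    return tau

pi = {"A": group_pi(520), "B": group_pi(650)}
g = {"A": 0.18, "B": 0.82}
u = 1.0 * rho + (-4.0) * (1.0 - rho)                 # u_+ = 1, u_- = -4
w = {j: rho / (rho @ pi[j]) for j in pi}             # soft-EqOpt weights rho/<rho, pi_j>

def U(j, beta): return float((u * pi[j]) @ policy(pi[j], beta))
def T(j, beta): return float((w[j] * pi[j]) @ policy(pi[j], beta))   # TPR t_j

grid = np.linspace(0.0, 1.0, 101)
UA = np.array([U("A", b) for b in grid]); TA = np.array([T("A", b) for b in grid])
UB = np.array([U("B", b) for b in grid]); TB = np.array([T("B", b) for b in grid])

lam = 5.0                                            # penalty strength lambda
obj = (g["A"] * UA[:, None] + g["B"] * UB[None, :]
       - lam * (TA[:, None] - TB[None, :]) ** 2)     # Phi(t_A - t_B) = (t_A - t_B)^2
iA, iB = np.unravel_index(int(np.argmax(obj)), obj.shape)
print(f"soft-EqOpt optimum: beta_A = {grid[iA]:.2f}, beta_B = {grid[iB]:.2f}")
```

Sweeping `lam` from 0 toward infinity interpolates between unconstrained MaxUtil and the hard EqOpt constraint, which is one way to read the role of $\lambda$ in (27).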