FAIRNESS IN MACHINE LEARNING: A SURVEY (Reading Notes)
创始人
2024-03-04 18:19:21

Paper link

        I recently finished another survey on fairness research in machine learning; the present survey presumably overlaps with it in many places, so overlapping material is not noted again here. See the notes on the previous paper:

A Survey on Bias and Fairness in Machine Learning reading notes (Catherine_he_ye's blog)

Section 1 Introduction

        This article attempts to provide an overview of the different schools of thought and approaches in the machine-learning literature for mitigating (social) bias and increasing fairness. It organizes approaches into the widely accepted framework of pre-processing, in-processing, and post-processing methods, further subdivided into 11 method areas. Although most of the literature emphasizes binary classification, fairness in regression, recommender systems, unsupervised learning, and natural language processing is also discussed, together with the currently available open-source libraries. Finally, the paper summarizes four dilemmas facing fairness research.

Section 2 Fairness in Machine Learning: Key Methodological Components

        Although not every fair-ML method fits the framework below, it provides a well-understood reference point and serves as one dimension for categorizing fairness methods in ML.

2.1 Sensitive and Protected Variables and (Un)privileged Groups

Most approaches to mitigate unfairness, bias, or discrimination are based on the notion of protected or sensitive variables (we will use the terms interchangeably) and on (un)privileged groups: groups (often defined by one or more sensitive variables) that are disproportionately (less) more likely to be positively classified.
1. Some variables are explicitly defined by law: "protected".
2. Still, attention should be paid to whether other minority variables ought to be protected as well; some work focuses on identifying such latent sensitive variables.
3. Some variables are not strictly sensitive, but are related to one or more sensitive variables: "related" variables.
4. Ignoring these "related" variables may lead to the false assumption that a fair ML model has been produced, which increases the risk of discrimination.
5. On proxies: a proxy is a related variable that can stand in for a sensitive one. The table in the paper provides some examples of sensitive variables and potential proxies; a sketch of how candidate proxies might be flagged follows this list.
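To make the proxy idea concrete, here is a minimal sketch (my own illustration, not from the survey) that flags candidate proxies by measuring the statistical association between each feature and a sensitive attribute; the column names, the one-hot/ordinal encoding, and the 0.4 cut-off are illustrative assumptions, and correlation is only a crude heuristic for relatedness.

```python
# Illustrative sketch: flag features so associated with a sensitive attribute
# that they may act as proxies. Threshold and encoding are assumptions.
import pandas as pd

def find_candidate_proxies(df: pd.DataFrame, sensitive: str, threshold: float = 0.4) -> pd.Series:
    """Return features whose absolute correlation with `sensitive` exceeds `threshold`."""
    # One-hot encode so categorical candidates (e.g., a zip_code column) are checked too.
    encoded = pd.get_dummies(df.drop(columns=[sensitive])).astype(float)
    # Crude ordinal encoding of the sensitive variable for a Pearson correlation.
    target = df[sensitive].astype("category").cat.codes
    corr = encoded.apply(lambda col: col.corr(target))
    return corr[corr.abs() > threshold].sort_values(key=lambda s: s.abs(), ascending=False)
```

Anything this flags (e.g., a postcode strongly correlated with race) would be a "related" variable in the sense above and deserves the same scrutiny as the sensitive variable itself.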

2.2 Metrics

Metrics usually emphasize either individual fairness (e.g., everyone is treated equally) or group fairness, where the latter is further differentiated into within-group (e.g., women vs. men) and between-group (e.g., young women vs. black men) fairness.

Increasing fairness often results in lower overall accuracy or related metrics, leading to the necessity of analyzing potentially achievable trade-offs in a given scenario.

2.3 Pre-processing

        Train the model on a "repaired" dataset. Pre-processing is considered the most flexible part of the data-science pipeline, as it makes no assumptions about the choice of modelling technique applied afterwards.

2.4 In-processing

These methods often incorporate one or more fairness metrics into the model's optimization function, seeking to converge on model parameters that maximize both performance and fairness.

2.5 Post-processing

These methods tend to apply transformations to the model's output to improve the fairness of predictions. Post-processing is among the most flexible approaches, since it only requires access to the predictions and the sensitive-attribute information, not to the underlying algorithm or ML model. This makes it applicable in black-box ML scenarios.

2.6 Pre-processing vs. in-processing vs. post-processing
A distinct advantage of pre- and post-processing approaches is that they do not modify the ML method explicitly, which means that (open-source) ML libraries can be leveraged unchanged for model training. However, they have no direct control over the optimization function of the ML model itself. Only in-processing approaches can optimize notions of fairness during model training. Yet this requires the optimization function to be accessible, replaceable, and/or modifiable, which may not always be the case.

Section 3 Measuring Fairness and Bias

3.1 Abstract Fairness Criteria

        Most quantitative definitions of fairness revolve around three fundamental aspects of a (binary) classifier:

        ① the sensitive variable $S$ (distinguishing the protected from the unprotected group); ② the target variable $Y$ (the true class); ③ the classification score $R$ (the predicted outcome).

        Based on these three elements, general fairness desiderata fall into three "non-discrimination" criteria:

        ① Independence: the score $R$ is independent of the sensitive variable $S$ ($R \perp S$), e.g., Statistical/Demographic Parity.

        ② Separation: the score $R$ is independent of $S$ conditional on the value of the target variable $Y$ ($R \perp S \mid Y$), e.g., Equalized Odds and Equal Opportunity.

        ③ Sufficiency: the target variable $Y$ is independent of $S$ conditional on the score $R$ ($Y \perp S \mid R$).

3.2 Group Fairness Metrics

3.2.1 Parity-based Metrics

Parity-based metrics typically consider the predicted positive rates, i.e., $P_r(\widehat{y}=1)$, across different groups.

e.g., Statistical/Demographic Parity: $P_r(\widehat{y}=1|g_i)=P_r(\widehat{y}=1|g_j)$; Disparate Impact: $\frac{P_r(\widehat{y}=1|g_1)}{P_r(\widehat{y}=1|g_2)}$.
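A minimal sketch of these two parity-based metrics (my own illustration; `y_pred` holds 0/1 predictions and `group` the group membership of each instance):

```python
# Statistical parity difference and disparate impact for two groups.
import numpy as np

def positive_rate(y_pred, group, g):
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    return y_pred[group == g].mean()  # P(y_hat = 1 | g)

def statistical_parity_diff(y_pred, group, g_i, g_j):
    # 0.0 means statistical/demographic parity holds exactly.
    return positive_rate(y_pred, group, g_i) - positive_rate(y_pred, group, g_j)

def disparate_impact(y_pred, group, g_i, g_j):
    # Ratio of positive rates; the common "80% rule" flags values below 0.8.
    return positive_rate(y_pred, group, g_i) / positive_rate(y_pred, group, g_j)

y_pred = [1, 0, 1, 1, 0, 1]
group  = ["a", "a", "a", "b", "b", "b"]
print(statistical_parity_diff(y_pred, group, "a", "b"))  # 0.0
print(disparate_impact(y_pred, group, "a", "b"))         # 1.0
```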
3.2.2 Confusion Matrix-based Metrics

While parity-based metrics typically consider variants of the predicted positive rate $P_r(\widehat{y}=1)$, confusion matrix-based metrics take into consideration additional aspects such as True Positive Rate (TPR), True Negative Rate (TNR), False Positive Rate (FPR), and False Negative Rate (FNR).

e.g., Equal Opportunity: considers true positives, $P_r(\widehat{y}=1|y=1\&g_i)=P_r(\widehat{y}=1|y=1\&g_j)$

         Equalized Odds: considers both true positives and false positives, $P_r(\widehat{y}=1|y=1\&g_i)=P_r(\widehat{y}=1|y=1\&g_j)$ and $P_r(\widehat{y}=1|y=0\&g_i)=P_r(\widehat{y}=1|y=0\&g_j)$

         Overall accuracy equality: considers accuracy, $P_r(\widehat{y}=1|y=1\&g_i)+P_r(\widehat{y}=0|y=0\&g_i) = P_r(\widehat{y}=1|y=1\&g_j)+P_r(\widehat{y}=0|y=0\&g_j)$

         Conditional use accuracy equality: equal predictive values in both directions across groups, i.e., equal positive and negative predictive values: $P_r(y=1|\widehat{y}=1\&g_i)=P_r(y=1|\widehat{y}=1\&g_j)$ and $P_r(y=0|\widehat{y}=0\&g_i)=P_r(y=0|\widehat{y}=0\&g_j)$

         Treatment equality: considers the ratio of false positives to false negatives, $\frac{P_r(\widehat{y}=1|y=0\&g_i)}{P_r(\widehat{y}=0|y=1\&g_i)}=\frac{P_r(\widehat{y}=1|y=0\&g_j)}{P_r(\widehat{y}=0|y=1\&g_j)}$

         Equalizing disincentives: considers the difference between the true positive rate and the false positive rate, $P_r(\widehat{y}=1|y=1\&g_i)-P_r(\widehat{y}=1|y=0\&g_i) = P_r(\widehat{y}=1|y=1\&g_j)-P_r(\widehat{y}=1|y=0\&g_j)$

         Conditional Equal Opportunity: equal opportunity conditioned on a specific attribute $A=a$, where $\tau$ is a threshold, $P_r(\widehat{y}\geq\tau|g_i\&y<\tau\&A=a) = P_r(\widehat{y}\geq\tau|g_j\&y<\tau\&A=a)$
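A minimal sketch computing per-group TPR and FPR (my own illustration), from which the Equal Opportunity gap (TPR difference) and the Equalized Odds gaps (TPR and FPR differences) can be read off directly:

```python
# Per-group confusion-matrix rates for two of the metrics above.
import numpy as np

def group_rates(y_true, y_pred, group, g):
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    m = group == g
    tpr = y_pred[m & (y_true == 1)].mean()  # P(y_hat=1 | y=1, g)
    fpr = y_pred[m & (y_true == 0)].mean()  # P(y_hat=1 | y=0, g)
    return tpr, fpr

def equal_opportunity_gap(y_true, y_pred, group, g_i, g_j):
    # 0.0 means equal opportunity holds exactly.
    return group_rates(y_true, y_pred, group, g_i)[0] - group_rates(y_true, y_pred, group, g_j)[0]

def equalized_odds_gaps(y_true, y_pred, group, g_i, g_j):
    tpr_i, fpr_i = group_rates(y_true, y_pred, group, g_i)
    tpr_j, fpr_j = group_rates(y_true, y_pred, group, g_j)
    return tpr_i - tpr_j, fpr_i - fpr_j  # both 0.0 under equalized odds
```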
3.2.3 Calibration-based Metrics

Calibration-based metrics take the predicted probability, or score, into account.
e.g., Test fairness / calibration / matching conditional frequencies (note that $S$ here denotes the predicted score, not the sensitive variable): $P_r(y=1|S=s\&g_i)=P_r(y=1|S=s\&g_j)$

         Well calibration: $P_r(y=1|S=s\&g_i)=P_r(y=1|S=s\&g_j)=s$

         Balance for positive and negative class: the expected predicted score is equal across groups within the positive class and within the negative class, $E(S|y=1\&g_i)=E(S|y=1\&g_j)$ and $E(S|y=0\&g_i)=E(S|y=0\&g_j)$

         Bayesian Fairness
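A minimal sketch of a per-group calibration check (my own illustration, assuming scores in [0, 1]; the bin count is arbitrary): bin the scores and compare, per bin, the observed positive fraction across groups (test fairness) and against the bin's mean score (well calibration).

```python
# Per-group reliability table for calibration-based metrics.
import numpy as np

def calibration_table(scores, y_true, group, g, n_bins=10):
    scores, y_true, group = map(np.asarray, (scores, y_true, group))
    m = group == g
    bins = np.clip((scores[m] * n_bins).astype(int), 0, n_bins - 1)
    rows = []
    for b in range(n_bins):
        sel = bins == b
        if sel.any():
            # Well calibration asks: observed P(y=1 | S in bin, g) ~ the bin's mean score.
            rows.append((b, scores[m][sel].mean(), y_true[m][sel].mean()))
    return rows  # (bin, mean predicted score, observed positive rate)
```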
3.3 Individual and Counterfactual Fairness Metrics

These metrics consider the outcome for each participating individual.
e.g., Counterfactual Fairness: a prediction is counterfactually fair if it would remain the same in a counterfactual world where the individual belonged to a different demographic group.

         Generalized Entropy Index: considers differences between an individual's prediction ($b_i$) and the average prediction accuracy ($\mu$), $GEI=\frac{1}{n\alpha(\alpha-1)}\sum_{i=1}^{n}\left[\left(\frac{b_i}{\mu}\right)^{\alpha}-1\right]$, where $b_i=\widehat{y}_i-y_i+1$ and $\mu=\frac{\sum_i b_i}{n}$

         Theil Index: the $\alpha=1$ case of the GEI, which simplifies to $\frac{1}{n}\sum_{i=1}^{n}\frac{b_i}{\mu}\log\frac{b_i}{\mu}$
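A minimal sketch of the GEI and Theil index exactly as defined above (my own illustration):

```python
# Generalized Entropy Index with b_i = y_hat_i - y_i + 1; alpha=1 gives the Theil index.
import numpy as np

def generalized_entropy_index(y_true, y_pred, alpha=2.0):
    b = np.asarray(y_pred, dtype=float) - np.asarray(y_true, dtype=float) + 1.0
    mu = b.mean()
    if alpha == 1.0:
        # Theil index; the epsilon guards log(0) when b_i = 0.
        return float(np.mean((b / mu) * np.log(b / mu + 1e-12)))
    return float(np.mean((b / mu) ** alpha - 1.0) / (alpha * (alpha - 1.0)))

def theil_index(y_true, y_pred):
    return generalized_entropy_index(y_true, y_pred, alpha=1.0)
```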

Section 4 Fairness Research in Binary Classification Scenarios

Blinding: the approach of making a classifier "immune" to one or more sensitive variables.
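Blinding itself is a one-liner; a minimal sketch (my own illustration, with illustrative column names) is below. Note the caveat from Section 2.1: "related" variables remain in the data, so dropping the sensitive columns alone does not guarantee fairness.

```python
# Blinding: drop the sensitive column(s) before training any off-the-shelf model.
import pandas as pd

def blind(df: pd.DataFrame, sensitive_cols) -> pd.DataFrame:
    return df.drop(columns=list(sensitive_cols))

# X_train = blind(df, ["gender", "race"])  # proxies such as zip_code still leak information
```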
Causal Methods: A key objective is to uncover causal relationships in the data and find dependencies between sensitive and non-sensitive variables. Causal methods are also used to identify proxies of sensitive variables and to debias training data.
Sampling and Subgroup Analysis

① correcting the training data; ② using subgroup analysis to identify groups that the classifier disadvantages

Accordingly, methods that seek to create fair training samples incorporate notions of fairness into the sampling strategy.

Subgroup analysis can also be used for model evaluation, e.g., analyzing whether a particular subgroup is discriminated against, or confirming whether a particular factor affects model fairness. Statistical hypothesis testing can assess whether a model robustly satisfies a fairness metric (a sketch of one such test follows). Probabilistic verification of fairness metrics, based on sampling over the sensitive variable, has also been proposed to evaluate a trained model within some (small) confidence bounds.
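As one possible instantiation of such a test (my own illustration, not the survey's specific procedure), a two-proportion z-test of the null hypothesis "both groups receive positive predictions at the same rate":

```python
# Two-proportion z-test on positive-prediction counts of two subgroups.
import math

def two_proportion_z_test(pos_i, n_i, pos_j, n_j):
    p_i, p_j = pos_i / n_i, pos_j / n_j
    p = (pos_i + pos_j) / (n_i + n_j)                 # pooled rate under H0
    se = math.sqrt(p * (1 - p) * (1 / n_i + 1 / n_j))
    z = (p_i - p_j) / se
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))  # two-sided
    return z, p_value

# e.g., 120/400 positives in group i vs. 90/400 in group j:
print(two_proportion_z_test(120, 400, 90, 400))  # z ~ 2.41, p ~ 0.016
```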

Transformation

The data are mapped or projected to ensure fairness. Often the transformation is only partial, to trade fairness off against accuracy.

Although transformation is primarily a pre-processing approach, it can also be applied in the post-processing stage.

Relabelling and Perturbation

These are a subset of transformation methods.

Relabelling involves modifying the labels of training-data instances;

Perturbation often aligns with notions of "repairing" some aspect(s) of the data with regard to notions of fairness. Sensitivity analysis explores how various aspects of the feature vector affect a given outcome. Although sensitivity analysis is not itself a method for improving fairness, it can help to better understand the uncertainty around fairness.
Reweighing: assigns weights to the instances of the training data while leaving the data itself unchanged.
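A minimal sketch in the spirit of the classic reweighing scheme of Kamiran and Calders (my own condensation of it): each (group, label) cell gets weight P(group) * P(label) / P(group, label), so that group and label look statistically independent under the weighted distribution.

```python
# Instance weights that decouple group membership from the label.
from collections import Counter

def reweighing_weights(groups, labels):
    n = len(labels)
    c_g = Counter(groups)                # counts per group
    c_y = Counter(labels)                # counts per label
    c_gy = Counter(zip(groups, labels))  # counts per (group, label) cell
    return [(c_g[g] / n) * (c_y[y] / n) / (c_gy[(g, y)] / n)
            for g, y in zip(groups, labels)]

# The weights can be fed to most learners, e.g. via sample_weight in scikit-learn.
```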
Regularization and Constraint Optimisation

When applied to fairness, regularization methods add one or more penalty terms that penalize discriminatory behaviour of the classifier.

Constraint-optimisation methods include a fairness term in the classifier's loss function during model training (see the sketch below).
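A minimal sketch of the regularization flavour (my own illustration, not a specific method from the survey): a logistic loss plus a penalty on the squared statistical-parity gap. The penalty form, the binary 0/1 group encoding, and `lam` are illustrative assumptions.

```python
# Fairness-regularized logistic loss: accuracy term + lam * (parity gap)^2.
import numpy as np

def fair_logistic_loss(w, X, y, group, lam=1.0):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))                  # predicted probabilities
    bce = -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
    gap = p[group == 0].mean() - p[group == 1].mean()   # statistical-parity gap
    return bce + lam * gap ** 2                          # lam trades accuracy for parity

# Minimizing this with any generic optimizer (e.g. scipy.optimize.minimize)
# yields an in-processing method; lam controls the fairness/accuracy trade-off.
```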

Adversarial Learning: When applied to fairness in ML, the adversary instead seeks to determine whether the training process is fair; when it is not, feedback from the adversary is used to improve the model.
Bandits: Bandit approaches frame the fairness problem as a stochastic multi-armed bandit framework, assigning either individuals, or groups of "similar" individuals, to arms, with fairness quality as a reward represented as regret. The two main notions of fairness that have emerged from the application of bandits are meritocratic fairness (group-agnostic) and subjective fairness (which emphasises fairness in each time period $t$ of the bandit framework).
Calibration: Calibration is the process of ensuring that the proportion of positive predictions is equal to the proportion of positive examples.
Thresholding

A post-processing approach.

Thresholding is a post-processing approach motivated by the observation that discriminatory decisions are often made close to decision boundaries because of a decision maker's bias [157], and that humans apply threshold rules when making decisions.
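A minimal sketch of one thresholding variant (my own illustration): choose a separate threshold per group so that each group's positive rate matches a common target rate, i.e., a statistical-parity flavour of post-processing; the threshold grid and target rate are assumptions.

```python
# Group-specific thresholds aiming at a common positive rate.
import numpy as np

def group_thresholds(scores, group, target_rate, grid=None):
    scores, group = np.asarray(scores), np.asarray(group)
    grid = np.linspace(0.0, 1.0, 101) if grid is None else np.asarray(grid)
    thresholds = {}
    for g in np.unique(group):
        s = scores[group == g]
        rates = np.array([(s >= t).mean() for t in grid])
        thresholds[g] = float(grid[np.argmin(np.abs(rates - target_rate))])
    return thresholds  # apply per instance: y_hat = score >= thresholds[its group]
```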

Section 5 Fairness Methods Beyond Binary Classification

Fair Regression: The main objective is to minimize a loss function $l(Y,\widehat{Y})$ that measures the difference between actual and predicted values, while also aiming to guarantee fairness; one constrained formulation is sketched below.
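One common way to formalize this (a sketch of the general shape, not the survey's specific formulation) is as a constrained problem, here with a parity-style constraint on the mean prediction per group:

```latex
\min_{f}\; \mathbb{E}\big[\, l(Y, f(X)) \,\big]
\quad \text{s.t.} \quad
\big|\, \mathbb{E}[f(X) \mid S=s_i] - \mathbb{E}[f(X) \mid S=s_j] \,\big| \le \varepsilon
\quad \text{for all groups } s_i, s_j
```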
Recommender Systems: "C-fairness" for fair user/consumer recommendation (user-based); "P-fairness" for fairness of producer recommendation (item-based).

A survey on fairness in recommender systems is on my reading list and can serve as a further reference.

Unsupervised Methods: 1) fair clustering; 2) investigating the presence and detection of discrimination in association rule mining; 3) transfer learning.
NLP: Unintended biases have also been noticed in NLP; these are often gender- or race-focused.

Section 6 Current Platforms (Open-Source Tools)

AIF360: Set of tools that provides several pre-, in-, and post-processing approaches for binary classification, as well as several pre-implemented datasets commonly used in fairness research.
Fairlearn: Implements several parity-based fairness measures and algorithms for binary classification and regression, as well as a dashboard to visualize disparity in accuracy and parity.
Aequitas: Open-source bias audit toolkit. Focuses on standard ML metrics and their evaluation for different subgroups of a protected attribute.
Responsibly: Provides datasets, metrics, and algorithms to measure and mitigate bias in classification as well as in NLP (bias in word embeddings).
Fairness: Tool that provides commonly used fairness metrics (e.g., statistical parity, equalized odds) for R projects.
FairTest: Generic framework that provides measures and statistical tests to detect unwanted associations between the output of an algorithm and a sensitive attribute.
Fairness Measures: Project that considers quantitative definitions of discrimination in classification and ranking scenarios. Provides datasets, measures, and algorithms (for ranking) that investigate fairness.
Audit AI: Implements various statistical significance tests to detect discrimination between groups and bias from standard machine-learning procedures.
Dataset Nutrition Label: Generates qualitative and quantitative measures and descriptions of dataset health to assess the quality of a dataset used for training and building ML models.
ML Fairness Gym: Part of Google's Open AI project; a simulation toolkit to study the long-run impacts of ML decisions, analyzing how algorithms that take fairness into consideration change the underlying data (previous classifications) over time.

Section 7 Concluding Remarks: The Fairness Dilemmas

① Balancing the tradeoff between fairness and model performance

② Quantitative notions of fairness permit model optimization, yet cannot balance different notions of fairness, i.e., individual vs. group fairness

③ Tensions between fairness, situational, ethical, and sociocultural context, and policy

④ Recent advances to the state of the art have increased the skills gap, inhibiting "man-on-the-street" and industry uptake
