About the need for a definition of fairness

Background

The keys to the European Union’s new regulation on Artificial Intelligence are to ensure traceability and responsibilities in the development of systems that make use of AI; to guarantee their transparency; and to ensure that they do not amplify or perpetuate biases in order to avoid discrimination of citizens subjected to algorithmic decisions.

However, this last objective, laudable in itself, faces many pitfalls along the way, starting with the lack of definition of its basic concepts. Of course, the regulation does not have to go into the details of the "hows" (that is not its purpose), but it should set the playing field through guidelines that can subsequently be applied to the development of specific cases. It is therefore surprising that Article 3 of the legislative proposal, dedicated to the definitions that facilitate understanding of the rest of the text, does not address the concepts of "bias" or "fairness", nor distinguish between lawful and unlawful discrimination.

In view of this gap, the following reflections are offered as a starting point:

Non-discrimination

We can define fairness as the objective of not treating someone differently based on a certain set of characteristics that the legal framework considers discriminatory. Here we would have to refer to national constitutions, but as these are disparate, the common framework is the Charter of Fundamental Rights of the European Union, which in its Article 21 notes:
"Any discrimination based on any ground such as sex, race, colour, ethnic or social origin, genetic features, language, religion or belief, political or any other opinion, membership of a national minority, property, birth, disability, age or sexual orientation shall be prohibited."

In other words: given two individuals who are equal in their personal features except for differing in any of the grounds listed in Article 21, both must receive the same treatment when accessing essential services. This must be taken into account by the providers of those services (whether in the public or private sector), and regardless of whether the gatekeeper of this access is a human employee or an AI-based system.

Elements that can harm fairness

To go deeper into these issues, we must realize that there are several kinds of problems that can harm fairness:

1] The first kind of issue is the direct use of a discriminatory feature present in a dataset to train a model, when that feature is forbidden to act as a discriminator (sex, for example). The solution to fight this risk is obviously to avoid recording and using those specific features listed by anti-discrimination laws, which in addition tend to be sensitive personal data under strong restrictions.
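As a minimal sketch of this first safeguard (with a purely hypothetical dataset and illustrative column names), protected features can simply be stripped from the training inputs:

```python
import pandas as pd

# Hypothetical applicant dataset; all column names and values are illustrative.
applicants = pd.DataFrame({
    "income":     [32000, 54000, 41000],
    "debt_ratio": [0.4, 0.2, 0.3],
    "sex":        ["F", "M", "F"],   # protected feature
    "age":        [67, 34, 45],      # protected feature
})

# Features listed by anti-discrimination law are removed before training,
# so the model cannot use them directly as discriminators.
PROTECTED = ["sex", "age"]
training_features = applicants.drop(columns=PROTECTED)

print(list(training_features.columns))  # ['income', 'debt_ratio']
```

This only addresses direct use; the proxy problem described next is untouched by it.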

2] The second kind of issue arises when the discriminatory feature is absent: it is not a field in the dataset, so it cannot be used directly, but there are proxy features highly correlated with the forbidden feature, which a machine learning algorithm can infer, creating clusters of customers driven by that feature. For example, it has been shown that geographical features, such as the zip code of residence, can be a proxy for ethnicity in some segregated urban environments. The solution to mitigate this risk is to cross-check the variables a model is using against aggregated sensitive features, if available (e.g. the statistical population breakdown of a neighborhood), something often very hard to achieve, as these correlations are hidden in most cases.
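One way to sketch such a cross-check (with synthetic data standing in for the aggregated sensitive statistics) is to measure how strongly each candidate variable concentrates a sensitive attribute:

```python
import pandas as pd

# Synthetic illustration: in a segregated city, the zip code of residence
# can encode ethnicity almost perfectly.
df = pd.DataFrame({
    "zip_code":  ["10001", "10001", "10001", "20002", "20002", "20002"],
    "ethnicity": ["A", "A", "A", "B", "B", "B"],  # aggregated sensitive data
})

# Share of the most frequent ethnicity within each zip code: values close
# to 1.0 mean the zip code is a strong proxy for the sensitive feature.
majority_share = (
    df.groupby("zip_code")["ethnicity"]
      .agg(lambda s: s.value_counts(normalize=True).max())
)
print(majority_share.to_dict())  # {'10001': 1.0, '20002': 1.0}
```

In this extreme toy case both zip codes are perfect proxies; with real data, any variable scoring near 1.0 deserves scrutiny before being fed to a model.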

To add a further layer of complexity, whether we make direct (1) or indirect (2) use of the sensitive feature, we can face other kinds of problems often referred to under the same umbrella term of "biases". However, biases can be of very different natures. We will review them below.

A] Biases embedded in poor data quality. Low data representativeness is an issue that can drive a bad algorithmic prediction. Poor accuracy metrics are often related to a lack of descriptive capacity, or to a lack of representativeness in the training dataset of the group we do not want to discriminate against. For example, elderly people tend to leave a very low digital footprint to be analyzed, which makes them "invisible" to machine learning models.

This under-representation of a given population cohort in training datasets can lead to poor outcomes when assessing their creditworthiness (or in other decision-making processes, as in the case of Amazon's personnel selection algorithm that discriminated against women (2018), which the company itself deactivated). The solution to this problem is to check whether the training dataset is well balanced with regard to the general population distribution across the features we do not want to use to discriminate (age, gender, etc.). Moreover, if a specific cohort is found to be underrepresented in the training dataset, it should be enhanced with additional data sources obtained not just by observation of the sampled population (e.g. the digital footprint of internet users, where elderly people could be underrepresented, or Big Tech employee data, where women could be scarce), but by diversifying the focus and targeting additional surveys at the sensitive cohort. Big Data puts a huge effort into inference, but sometimes direct questions can be asked to get a more accurate view of blind spots.
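A balance check of this kind can be sketched as follows (the cohort shares are invented for illustration; in practice the reference distribution would come from official census data):

```python
import pandas as pd

# Hypothetical age distribution of a training dataset vs. a census reference.
training_share = pd.Series({"18-34": 0.55, "35-64": 0.40, "65+": 0.05})
census_share   = pd.Series({"18-34": 0.25, "35-64": 0.50, "65+": 0.25})

# Representation ratio per cohort: values well below 1.0 flag groups that
# need extra data collection, e.g. targeted surveys.
ratio = (training_share / census_share).round(2)
print(ratio.to_dict())  # {'18-34': 2.2, '35-64': 0.8, '65+': 0.2}
```

Here the 65+ cohort is represented at a fifth of its census weight, exactly the kind of blind spot described above.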

An additional effect is the false positive or false negative cases of individuals who are assigned the same characteristics as the group they belong to, when actually they are outliers within their class, and this generalization, or tendency toward the average, carried out by the analytical model harms them. As data are just a faint and partial shadow of the reality they represent, classification into big groups usually fails at some point, and the more granular a taxonomy is, the more accurate the outcome (provided we have enough data to describe fine-grained groups). This was the case of the algorithmic evaluation of students applying to university in Great Britain in the summer of 2020: faced with the impossibility of holding exams due to the COVID context, an algorithm assigned marks to candidates according to their academic background and context information, with many people affected by these decisions (which were later revoked). In parallel, mediocre students in traditionally high-scoring neighborhoods could have been unfairly benefited by this method.

The solution is obviously to avoid generalizations and not assign an individual the features of the group he or she is supposed to belong to, running more fine-grained analyses instead. Methodological enhancement of a model will also improve its accuracy and reduce the relative weight of false positive and false negative cases.
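Whether such errors concentrate on one cohort can be made measurable by computing false positive and false negative rates per group; the labels below are entirely synthetic:

```python
# Synthetic ground truth, model predictions and group membership.
y_true = [1, 0, 1, 0, 1, 0, 1, 0]
y_pred = [1, 0, 0, 0, 1, 1, 0, 1]
group  = ["A", "A", "A", "A", "B", "B", "B", "B"]

def error_rates(g):
    """Return (false positive rate, false negative rate) for group g."""
    idx = [i for i, x in enumerate(group) if x == g]
    fp  = sum(1 for i in idx if y_pred[i] == 1 and y_true[i] == 0)
    fn  = sum(1 for i in idx if y_pred[i] == 0 and y_true[i] == 1)
    neg = sum(1 for i in idx if y_true[i] == 0)
    pos = sum(1 for i in idx if y_true[i] == 1)
    return fp / neg, fn / pos

print(error_rates("A"))  # (0.0, 0.5)
print(error_rates("B"))  # (1.0, 0.5)  -> false positives concentrate on B
```

Comparable error rates across groups are one of several possible fairness criteria; a large asymmetry, as for group B here, signals that the model's mistakes are not evenly distributed.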

B] Social biases. Beyond the aforementioned problems caused by poor technical performance or by data lacking sufficiently granular descriptive capacity, we now address a completely different issue: the case when the data and the algorithms work perfectly well, but it is the reality behind the data that is very uncomfortable from an ethical point of view. Our socio-economic system is far from perfect, there are many inequalities in it, and data are just the projection of that reality into an embedding space. Models can find those facts and build decisions on them. For instance, spelling mistakes on credit application forms can objectively correlate with higher default rates; women have lower wages than men on average, and thus can receive a worse financial score. What do we do with those imperfections, given that they are not analytical errors? Those are decisions that should not be left to the data scientist's criteria and liability. Responsible business criteria should be defined case by case, searching for a balance between not leaving anyone behind and our profitability objectives. What is in the hands of data scientists is to raise their hands when any of these situations is detected.
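A first step toward raising that hand is simply measuring the outcome gap between groups; the scores below are invented for illustration:

```python
# Synthetic credit scores: the model can be statistically "correct" while
# reproducing a wage gap that exists in society.
scores = {
    "men":   [640, 700, 680, 720],
    "women": [600, 650, 620, 670],
}

# Average score gap between the two groups.
gap = sum(scores["men"]) / len(scores["men"]) - sum(scores["women"]) / len(scores["women"])
print(gap)  # 50.0
```

A gap like this is not an analytical error to be fixed by the data scientist; it is a finding to be escalated and decided upon as a business and ethical matter.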

The solution for this last effect touches the field of politics. In some cases, such as the access of minorities to education in the US, affirmative action has been forced to mitigate the fact that, on average, scores of the vulnerable population are historically and objectively lower than expected.

The following scheme summarizes the combinations of cases we can face with regard to algorithmic discrimination:

  • Case 1A: direct use of a sensitive feature, combined with technical biases (poor data quality or representativeness).
  • Case 1B: direct use of a sensitive feature, combined with social biases embedded in the data.
  • Case 2A: indirect use of a sensitive feature through proxy variables, combined with technical biases.
  • Case 2B: indirect use of a sensitive feature through proxy variables, combined with social biases embedded in the data.

This frame can be applied to any fairness assessment in relation to any feature listed in the aforementioned European Charter of Fundamental Rights.

Implications on a specific example

From a judicial standpoint, the implications of a direct use of a forbidden variable are much more serious than in the case of an indirect use.

Not in the EU jurisdiction but in the USA, we have the recent case of the Apple Card. Following a cross-diversification strategy, in August 2019 Apple launched a new product: a credit card in partnership with Goldman Sachs (given that credit provision is a regulated activity, working together with a bank was a requirement).

As a matter of fact, Apple has historically treated privacy and rigor in the use of its customers' data as a differential competitive advantage. Thus, the Apple Card introduced innovations such as a dynamic CVV and identity verification through Apple device features such as Face ID or Touch ID.

Customers were meant to be at the center of this innovative means of payment. However, in November 2019, David Heinemeier Hansson, the prominent CTO of Basecamp, publicly shared with the 436,000 followers of his Twitter account that he and his wife had been given different credit limits for their Apple Cards, behind which lay the same Goldman Sachs bank account. More specifically, his credit limit was 20 times higher than the one assigned to his wife, which pointed to a probable case of gender discrimination.

The impact grew so quickly that it soon attracted the attention of New York State's financial regulator, and media outlets such as The Washington Post, Business Insider, CNN, the BBC and The New York Times became involved, echoing the story.

A huge reputational problem for both companies, Apple and Goldman Sachs, threatened to evolve into a penalty by government supervisors: the case entered formal scrutiny by the New York State Department of Financial Services (NYDFS) in November 2019, a process that lasted until March 2021.

For the case at hand, the law that applies is the United States Code:

Title 15 — Commerce And Trade

Chapter 41 — Consumer Credit Protection

Subchapter IV — Equal Credit Opportunity Act, which states as follows:

(a) Activities constituting discrimination

“It shall be unlawful for any creditor to discriminate against any applicant, with respect to any aspect of a credit transaction on the basis of race, color, religion, national origin, sex or marital status, or age (provided the applicant has the capacity to contract)”

However, it was demonstrated through the regulator's intervention that the algorithm made no direct use of sex. Cases 1A and 1B can therefore be discarded, and with them intentional discrimination, as the New York State Department of Financial Services (NYDFS) stated in its final report on the case, in March 2021:

“The Department’s exhaustive review of documentation and data provided by the Bank and Apple, along with numerous interviews of consumers who complained of possible discrimination, did not produce evidence of deliberate or disparate impact discrimination but showed deficiencies in customer service and transparency, which the Bank and Apple have since taken steps to remedy. Additionally, this report addresses issues illuminated by the public discussion of the allegations, including misconceptions about spousal “shared finances” and authorized users, how to build credit history, the need for increased transparency in credit decisions, bias inherent in credit scoring, and the risks and benefits of using artificial intelligence in credit decision-making.”

Finally, the following facts are very important to decide whether we face a 2A or a 2B case, and to consider what mitigation actions could have been taken:

  1. As we have said, there was no direct use of the sex feature by the algorithm.
  2. The main driver of the disparate credit limits given to customers was the role of account holder versus authorized user.
  3. Account holders were given much higher credit limits than authorized users, based on empirically derived credit analysis built on statistically sound data and time series, which showed that the solvency of the latter was lower than that of the former.
  4. No data on this were made public, but it is quite probable that the gender composition of the two roles, account holders and authorized users, was imbalanced.

Given all these factors, we can say that this is a 2B-class case: a case in which the developers of the algorithm taking creditworthiness decisions on the Goldman Sachs side had no direct intention to discriminate, but also a case where discrimination was being produced indirectly, given that the field "account holder/authorized user" was a proxy for gender as a result of an uneven distribution of men and women between these groups, a fact that is the legacy of a society in which men with economic autonomy still outnumber women.

Although the lack of intention (or of direct use of a forbidden variable) spared the partnership between Apple and Goldman Sachs a fine, the reputational harm was already done.

Mitigating actions

The lessons from how things happened in this example can be very useful to avoid similar cases in the future.

The first lesson learned is that algorithmic creditworthiness assessment is an AI application with a high impact on people's lives. Changing jurisdiction from the USA to the EU, in the proposal for the AI Act presented to the European Parliament by the European Commission in April 2021, this use case is labeled as high risk due to its implications for fundamental rights:
"AI systems intended to be used to evaluate the creditworthiness of natural persons or establish their credit score, with the exception of AI systems put into service by small scale providers for their own use" (Annex III, point 5(b)).

The second lesson is the need to properly document every step taken in the development of any high-risk AI system:

  • What data have been used to train the model, and under what terms they were gathered, in order to make legitimate use of them.
  • What algorithms were tried, and which was finally selected to implement the final model.
  • What checks were made on the outcomes of the model, in order to carry out proper quality assurance before the final deployment to real users.
  • What controls are in place to guarantee that the model keeps performing without deviations.
  • What channels customers have available to provide feedback about errors and misbehavior of the model (better than Twitter).
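In code-driven teams, that documentation can live alongside the model itself, for instance as a structured record kept under version control. The field names below are illustrative, not the AI Act's formal technical-documentation template:

```python
# Minimal, illustrative documentation record for a high-risk AI system,
# mirroring the checklist above; all values are hypothetical.
model_card = {
    "training_data": {
        "sources": ["internal CRM extract"],
        "legal_basis": "contract",
    },
    "algorithms_tried": ["logistic_regression", "gradient_boosting"],
    "algorithm_selected": "gradient_boosting",
    "pre_deployment_checks": ["accuracy", "per-group error rates"],
    "monitoring_controls": ["monthly drift report"],
    "feedback_channel": "in-app complaint form",
}

# Each bullet point above maps to a field that auditors can inspect.
print(sorted(model_card))
```

Keeping the record machine-readable makes it easy to verify that no step of the checklist was skipped before deployment.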

Last but not least, it is key to keep in mind that removing a sensitive variable from the input does not eliminate potential discrimination risks through, for example, proxy variables, as this article explains. A very dangerous (but widespread) misconception is that being blind to the sensitive features we do not want to use to discriminate leaves us within a safety zone. On the contrary: sensitive data could be gathered precisely in order to use them not to train the model, but in quality control checks, to measure how the customer segments in the outcome are balanced with regard to those features.
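Such a quality-control check can be sketched as follows: the sensitive feature never enters training, but is joined back afterwards to audit the decisions (synthetic data; the 0.8 threshold is the common US "four-fifths" rule of thumb):

```python
# Model decisions (1 = approved) joined with a sensitive feature that was
# withheld from training and used only for this post-hoc audit.
decisions = [("M", 1), ("M", 1), ("M", 0), ("M", 1),
             ("F", 1), ("F", 0), ("F", 0), ("F", 1)]

def approval_rate(g):
    outcomes = [d for s, d in decisions if s == g]
    return sum(outcomes) / len(outcomes)

# Ratio of approval rates between groups; values below ~0.8 warrant review.
ratio = approval_rate("F") / approval_rate("M")
print(round(ratio, 2))  # 0.67 -> below 0.8, worth investigating
```

An audit like this, run before a public launch, is exactly the kind of check that could have surfaced the Apple Card imbalance internally rather than on Twitter.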

In fact, the EU AI Act draft states the following in its Article 10(5):

“To the extent that it is strictly necessary for the purposes of ensuring bias monitoring, detection and correction in relation to the high-risk AI systems, the providers of such systems may process special categories of personal data referred to in Article 9(1) of Regulation (EU) 2016/679 (aka GDPR: data revealing racial or ethnic origin, political opinions, religious or philosophical beliefs, or trade union membership, and the processing of genetic data, biometric data for the purpose of uniquely identifying a natural person, data concerning health or data concerning a natural person’s sex life or sexual orientation), subject to appropriate safeguards for the fundamental rights and freedoms of natural persons, including technical limitations on the re-use and use of state-of-the-art security and privacy-preserving measures, such as pseudonymisation, or encryption where anonymisation may significantly affect the purpose pursued”

Conclusions

There is a gap between "perceived discrimination" and "factual discrimination": the second is defined by the current legal framework, and the first by society and reputational players (media, political debate, etc.).

Corporations that work with data and develop AI solutions must of course comply with the local legal framework wherever they deploy and operate their solutions, but even when that legal compliance is achieved, there are risks in the field of perceived discrimination that must be addressed in coherence with the ethical positioning and values of each corporation.

The questions "what CAN we do with our customers' data?" and "what SHOULD we do with our customers' data?" can have very different answers, even more so when data-driven decisions affect people's lives.

The gap between both questions and their subsequent answers must be addressed through solid, written, Ethical-AI-by-design guidelines known by all employees.

Last, but not least: if data, which are but shadows of an uncomfortable and imperfect reality, tell us that collective A should receive a different treatment than collective B from a data-driven perspective (e.g. creditworthiness), how should a company behave?

From my personal point of view, the best option is to develop models that are blind to the features that separate collective A from collective B, and thus provide them with equal treatment. But this comes at a cost, and it would imply recognizing that the Milton Friedman doctrine, which stated that "the goal of any firm is to maximize returns to shareholders", is obsolete: companies that activate affirmative action can play a very active role in leveling social inequalities. This research by the MIT Media Lab analyzed how to achieve this at a minimum cost.

Juan Murillo Arias

MSc Civil Engineer by UPM, Executive MBA by EOI. Experience as urban planner and project manager in technological innovation and smart cities initiatives.