Automating fair decisions: open dilemmas (2nd part)

Juan Murillo Arias
3 min read · Nov 1, 2019

In a previous article I highlighted how, as automated decision-making systems become increasingly pervasive, concerns are arising about the fairness of these AI-based systems’ outcomes (as if human decisions had formerly been perfectly objective and free of bias :-).

Data scientists face two big issues when developing algorithms that automate processes with an impact on people’s lives (access to basic rights such as education, employment, healthcare, or financial services) if they are to avoid fairness risks:

1. Finding a definition of what is fair and what is unfair.

2. Implementing those criteria in their models to avoid unfair discrimination.

The first issue needs to be grounded in a solid definition of what discrimination is:

Given two persons with n attributes, P1(x1, x2, …xn) and P2(x’1, x’2, …x’n), if all those attributes coincide except one, xm≠x’m, and the decision system provides a different output for P1 than for P2, then the variable xm is acting as a discriminator. For example: given two persons with the same wages, the same level of studies, working in the same sector, with the same capital and the same liquidity, but a different kind of labour contract, they may receive different answers when asking for a loan; since the person with a temporary contract may carry a higher default risk than the person with a permanent contract, the former will be offered a loan at a higher interest rate. The variable xm=kind_of_labour_contract is acting as a discriminator, but a lawful one. If we choose a different example, two persons with the same college degree and marks, the same knowledge of languages and the same working experience but a different gender should not be treated differently in a recruiting process because of that gender difference; we all know that this would be unfair.
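
To make that definition concrete, here is a minimal sketch of the single-differing-attribute test with a toy scikit-learn model; the feature names, the encoding of the contract type and the data are all hypothetical and only serve to illustrate the flip-one-variable check.

```python
# A toy flip test: train a simple model, then compare its outputs for two
# applicants who differ only in one attribute (here, the contract type).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical features: [wages, years_of_study, capital, contract_type]
# contract_type: 0 = temporary, 1 = permanent (illustrative encoding).
X = rng.normal(size=(500, 4))
X[:, 3] = rng.integers(0, 2, size=500)
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# Two applicants identical in every attribute except x_m = contract_type.
p1 = np.array([[0.2, 1.0, 0.3, 0.0]])  # temporary contract
p2 = np.array([[0.2, 1.0, 0.3, 1.0]])  # permanent contract

# If the two probabilities differ, contract_type is acting as a discriminator.
print("P1:", model.predict_proba(p1)[0, 1], "P2:", model.predict_proba(p2)[0, 1])
```

If the two predicted probabilities differ, contract_type is the discriminating variable, and the remaining question is whether that discrimination is lawful or not.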

So, what we need in order to differentiate between lawful and unlawful discrimination is a closed list of variables whose use as discriminators is considered illicit, and this is not a technical issue but an ethical and even political matter: countries within the same cultural context have reached a consensus about it. For example, the Charter of Fundamental Rights of the European Union dedicates its Article 21 to this issue.

The second issue is how to program and train ML models that avoid unfair outputs. Excluding the direct use of those forbidden variables is the easiest way to fight discrimination, but it does not completely remove the risk, because many other variables (which in principle are not associated with the kind of sensitive attributes we want to avoid) remain active in our datasets. For example: place of residence might correlate with a sensitive attribute like race in highly segregated urban environments, and phone location tracks can reveal citizens’ leisure patterns, which might include visits to gay bars, a sensitive attribute that we would prefer not to use in any decision. The problem is that, even when the human data scientist wants to remain agnostic towards those correlations, the model can build them by itself. The solution is not to be blind to those spurious correlations but, on the contrary, to try to identify them and take them out of the decision process. To achieve that neutrality we face two challenges:

A] In order to check that our automated system is not basing its decisions on variables that we do not want it to use, we actually need that very information in order to verify that the process is independent of them. If we want to prevent vulnerable social groups from being isolated and given discriminatory answers, we have to check that the social clusters our model has created are diverse with regard to the sensitive attributes… but if we cannot even record those sensitive attributes due to legal restrictions, AI auditing becomes much more difficult.
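
As an illustration of point A, here is a minimal sketch of a group-level audit, assuming the sensitive attribute (a hypothetical ‘group’ column) has actually been recorded; the data and the 80% rule of thumb are illustrative, not a prescribed methodology.

```python
# Hypothetical audit table: the model's decisions alongside the sensitive
# attribute, which must have been recorded for this check to be possible.
import pandas as pd

audit = pd.DataFrame({
    "group":    ["a", "a", "b", "b", "b", "a", "b", "a"],
    "approved": [1, 1, 0, 0, 1, 1, 0, 1],
})

# Approval rate per sensitive group: large gaps flag potential disparate impact.
rates = audit.groupby("group")["approved"].mean()
print(rates)

# One common rule of thumb (the "80% rule") compares the lowest and highest rates.
print("disparate impact ratio:", rates.min() / rates.max())
```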

B] If we are using models that are hardly explainable, then we will be blind to the spurious correlations we want to avoid (a recent example here). So, efforts in algorithmic transparency must be made in order to check how much weight each item in the input data has had on the model’s output decision.
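
And as an illustration of point B, a minimal sketch of such a transparency check using permutation importance from scikit-learn on synthetic data, where a hypothetical ‘postcode’ feature is built on purpose as a proxy of a hidden sensitive attribute.

```python
# Synthetic data where 'postcode' is deliberately built as a proxy of a hidden
# sensitive attribute; permutation importance reveals how much the model uses it.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(1)

sensitive = rng.integers(0, 2, size=1000)                 # never given to the model
postcode = sensitive + rng.normal(scale=0.2, size=1000)   # proxy of 'sensitive'
income = rng.normal(size=1000)
X = np.column_stack([income, postcode])
y = (income + sensitive + rng.normal(scale=0.5, size=1000) > 0.5).astype(int)

model = GradientBoostingClassifier().fit(X, y)

# A large importance for 'postcode' shows the model leaning on the proxy.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in zip(["income", "postcode"], result.importances_mean):
    print(f"{name}: {score:.3f}")
```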

In conclusion, further advances must be made along both dimensions: the ethical criteria and definitions must become more specific, and technical methodologies must be developed to implement those criteria.

Juan Murillo Arias

MSc in Civil Engineering from UPM, Executive MBA from EOI. Experience as an urban planner and project manager in technological innovation and smart city initiatives.