Automating fair decisions: open dilemmas

Juan Murillo Arias
3 min read · Oct 14, 2018


Ethical dilemmas about fairness affect everyone: individuals, corporations, and governments. Where are the limits between fair discriminating decisions made in the name of the free market and illegal discriminatory ones? Are they clear or subtle? In the end, nobody should be treated differently because of a feature he or she cannot change. Shouldn't companies recruit more philosophers and lawyers to provide answers, train their teams, and avoid legal and reputational risks?

In his masterclasses, Michael J. Sandel asks his students things like: Is it wrong for retailers to raise the prices of basic items such as bottled water or fresh food after a natural disaster like an earthquake or a hurricane? Is it fair that Uber's dynamic pricing algorithm raises transport prices on a night of heavy rain? What about setting physical requirements such as beauty, youth, and a fit body to work at Abercrombie? Does affirmative action lead to an optimal allocation of resources (whether in access to education, to financial services, or to labor markets)? How long should it be applied? Will equality ever be reached?

So far, all the previous decisions were taken by humans, and even when those decisions were assisted by computer models, the models were consciously programmed according to rules set by human decision-makers. However, in the era of machine learning we are no longer programming but teaching: models learn from huge amounts of data, sometimes biased shadows of reality. If some of these dilemmas already had hard answers in the era of human rules, what scenario do we face when using artificial intelligence to create all kinds of scorings based on our digital footprint? If we are heading toward a future in which access to all kinds of services will be determined by machine learning models, three big questions must be solved:

Reliability, which has to do with the amount of risk we are willing to assume. That risk must be measured through confusion matrices, and it varies widely depending on the kind of service we provide. Though regrettable, false positives carry only reputational implications when providing leisure recommendations or social network services; the consequences are much more severe when deciding a student's future or an entrepreneur's access to credit, and absolutely severe in the case of a health diagnosis. Cathy O'Neil compiled a very descriptive set of examples in her book, Weapons of Math Destruction.
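
By way of illustration, here is a minimal sketch of that measurement, assuming scikit-learn and made-up binary labels (not taken from the article): it reduces a confusion matrix to the false positive and false negative rates whose cost depends on the service at stake.

```python
# Minimal sketch: turn a confusion matrix into error rates.
# The labels below are hypothetical; in practice they would come from a
# held-out evaluation set for the model under review.
from sklearn.metrics import confusion_matrix

y_true = [0, 0, 1, 1, 0, 1, 0, 1]   # hypothetical ground truth
y_pred = [0, 1, 1, 0, 0, 1, 0, 1]   # hypothetical model decisions

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
false_positive_rate = fp / (fp + tn)   # a wrong recommendation vs. a wrong diagnosis
false_negative_rate = fn / (fn + tp)   # carry very different costs

print(f"FPR={false_positive_rate:.2f}  FNR={false_negative_rate:.2f}")
```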

Interpretability: when training a deep learning model, understanding which variables have been critical for the output is a challenge in itself. That is why the GDPR guarantees EU citizens the right not to be subject to fully automated decisions and to keep them under human control (which is also related to tracing responsibility). But the human using the model to back a decision has the right to know how it operates. Huge research efforts are being made in this field.
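
As one example of those research efforts, the sketch below uses permutation importance, a model-agnostic way of asking which inputs a prediction actually depends on. The library (scikit-learn), dataset, and model are illustrative assumptions, not the author's setup.

```python
# Minimal sketch: rank input variables by how much shuffling each one
# degrades the model's score (permutation importance).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure the drop in test score.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)

for name, importance in sorted(zip(X.columns, result.importances_mean),
                               key=lambda pair: -pair[1])[:5]:
    print(f"{name}: {importance:.3f}")
```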

Fairness assurance in the programming (or training) of automated models requires first defining what is fair, and that brings us back to the first point of this reflection: data scientists, computer engineers, designers, and behavioral economics experts must work together with philosophers, lawyers, psychologists, and sociologists in multidisciplinary teams to build and deploy technological services that avoid unjust results. Relevant figures such as DJ Patil point to the need for an ethical oath for data science, and public institutions such as the European Union, and even private corporations such as McKinsey, are leading debates around these issues.
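
To make "define what is fair first" concrete, the sketch below computes one candidate criterion, demographic parity of approval rates across groups, on made-up data. Both the criterion and the data are illustrative assumptions, since the article does not commit to any single definition of fairness.

```python
# Minimal sketch: compare approval rates across a protected attribute.
# Demographic parity is only one of several competing fairness definitions.
import numpy as np

decisions = np.array([1, 0, 1, 1, 0, 1, 0, 0])                  # hypothetical approvals
group = np.array(["A", "A", "A", "B", "B", "B", "B", "B"])       # hypothetical protected attribute

rate_a = decisions[group == "A"].mean()
rate_b = decisions[group == "B"].mean()

# A large gap is a warning sign to investigate; it does not by itself
# prove (or rule out) unfair treatment.
print(f"approval rate A={rate_a:.2f}, B={rate_b:.2f}, gap={abs(rate_a - rate_b):.2f}")
```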

It is time to go from reflection to action by implementing measures that make artificial intelligence a useful tool for shaping more equal societies in the coming years; this is the right direction if we want to avoid dystopian scenarios.


Juan Murillo Arias

MSc in Civil Engineering from UPM, Executive MBA from EOI. Experience as an urban planner and as a project manager in technological innovation and smart city initiatives.