Since the deployment of the internet in the mid-1990s, and even more rapidly since smartphones achieved widespread adoption more than 10 years ago, many of the interactions that previously happened in a physical context have moved to the digital realm. The expansion of the applications this enabled has placed an increasing amount of data in the hands of corporations (and governments). Analyzing this digital footprint allows organizations to deepen their knowledge of their customers, and to transform these data into value that flows back to those customers.
In the 2020s, citizens recognize the positive role that data and AI have played in stabilizing the economy during the COVID-19 health and economic crisis, whether through direct actions to alleviate the burdens of individuals and businesses, or as key intermediaries of public aid. However, new elements are appearing on the horizon in this decade: one of them is the emergence of many technological innovations based on the use of data and on solutions built with artificial intelligence, which are set to revolutionize the way business is done across all sectors of activity, finance among them.
The latest milestone on this ladder of technical progress is the development of AI systems that automatically surface patterns hidden in that vast amount of data. Although some of these analytical techniques were first introduced more than 50 years ago, it is only now, thanks to the data explosion, that they are becoming pervasive.
But together with the positive applications and opportunities these innovative technologies offer, some risks are materializing. Some examples of questionable uses of data and AI that have had a strong impact on the public debate are:
- The Cambridge Analytica case (2016): several lawsuits in different countries for misuse of personal data and manipulation of public opinion, the largest being the one brought by the Federal Trade Commission against Facebook.
- Amazon’s personnel selection algorithm, which discriminated against women (2018): the company itself deactivated it.
- The creditworthiness algorithm (by Goldman Sachs) that granted different credit limits to different groups of Apple Card users (2019): although the NY State financial supervisor found no evidence of intentional discrimination against women, some degree of reputational harm had already taken place. If people think that a company’s data practices are unethical, its reputation will be damaged no matter how thoroughly it proves that it complies with the law. That is why brands should define and follow a code of ethics.
- The Spanish retail company Mercadona’s case on security surveillance by artificial vision in spaces accessible to the public (still without a ruling by the AEPD). AI for surveillance and citizen identification in publicly accessible spaces is precisely one of the highest-risk applications of AI according to the European Commission’s proposal for an AI Act.
- The algorithmic evaluation of students for university admission in Great Britain (June 2020): faced with the impossibility of holding entrance exams in the COVID context, an algorithm assigned marks to candidates according to their academic background and contextual information, affecting many people (the decisions were later revoked).
Behind some of these cases there was no bad intention initially, but undesired outcomes were reached due to a lack of reflection and control over key design decisions. In any case, what all of them have demonstrated is the importance of making responsible use of data. Moreover, as a reaction to public demand, regulators in some jurisdictions have adapted the legal framework to this new reality and to citizens’ concerns.
- In April 2016 the European Union adopted the General Data Protection Regulation (it entered into force in May 2018).
- In April 2021 the European Commission made public an AI Act proposal, to be debated in the European Parliament.
But, regardless of the geography where a company develops and offers its services, what is clear is that some self-regulatory approach to reliable data and AI use is required to enhance customers’ trust in digital services.
An important factor to be taken into account is that data and AI applications have very different potential impacts when things go wrong, and thus, risk management actions must be focused accordingly.
As part of the ideation and development process of any new solution that makes use of data, some basic questions should be raised:
- Do we have the right to gather and use the data required to run the initiative? Is the person who generated those data aware of the use we are going to make of them? Have we explained how it will affect him or her?
- Assuming this new solution runs properly, how will it affect people’s lives?
- If it fails at some point, what could be the effects of a misbehavior of the system? What controls are we going to put in place to alert us if that happens? Would we be able to explain where the failure was, in order to correct it and remain accountable?
Finding and recording answers to all these basic questions is a useful exercise to set common ground between the diverse teams involved, in their respective roles as developers and shapers: data scientists, business experts, data protection officers and legal services, security experts, and corporate social responsibility advisors.
Framing these and other issues is key to incentivizing the necessary internal debate within corporations, a debate that should guide them toward the desired end point in which reputational and operational risks are mitigated.
As with every innovation, the Data & Artificial Intelligence pairing is two-sided: there are huge opportunities embodied in its use, but what are the risks associated with the applications of these new technologies? What tools and processes can we develop to control those risks as a responsible corporation? Clear ethical guidelines on data management and an AI systems governance framework are the foundation upon which to build a sustainable future.
An entity as relevant as the IIF describes ethics as “a system of moral principles governing an actor’s behavior or the conduct of an activity. In the case of the financial institutions, ethics bridges the gap between regulated and non-regulated spaces — that is, what can be done vs. what should be done”. In this definition the subject is as broad as anyone can imagine; in the case of a corporation, every stakeholder that interacts with it in any geography (and legal framework) should be treated according to those “moral principles”: individuals, families, entrepreneurs, business owners, startups, private and public corporations, as well as its own employees. All of them leave a digital footprint in IT systems that must be used to enhance trust in the company, not to erode it.
Objectives to be achieved in the ethical development of AI
1| Proper access to data, and secure data management
This aim points to the need to establish adequate means to gather data, keeping our customers informed about what information we record in our interactions with them, what we use those data for, and how they can exercise their right to control all those aspects.
1.1| Privacy by design
Trustworthy data gathering and use, and proper consent management by the data subject, are aspects already covered in the legal framework on privacy, with the European General Data Protection Regulation serving as a global reference; it also established the figure of the Data Protection Officer and tools such as the Privacy Impact Assessment for every data-based initiative.
From a technical point of view, reliable data lifecycle management also embraces aspects such as data quality and the implementation of safety standards.
1.2| Specific actions towards privacy risk mitigation
The main challenge in this sense is to keep a coherent position across all the geographies where a company carries out its activity, given the heterogeneity of the existing legal frameworks, and taking into account the trade-offs between homogeneous, coherent corporate behavior and business competitiveness goals. This dilemma also applies to many of the following points.
2| Fairness
The IIF addresses the concept as follows: “Fairness — a hotly debated term in the global public policy discourse — is defined here as a principle that understands and minimizes discrimination based on legal protected characteristics that aligns with a firms’ ethical values and considers the dynamics and cultural variations of ethical codes.”
To be more specific, we can define fairness as the goal of not treating someone differently according to a given set of features that we consider discriminatory. For example, the EU Charter of Fundamental Rights lists the following features in its article 21:
“Any discrimination based on any ground such as sex, race, colour, ethnic or social origin, genetic features, language, religion or belief, political or any other opinion, membership of a national minority, property, birth, disability, age or sexual orientation shall be prohibited.”
In most legal frameworks the majority of those features are considered sensitive data, and recording and processing them is even forbidden, as stated for example in GDPR art. 9. This impedes model validation against a ground truth, adding technical challenges to fairness assurance measures. In most cases, all analysts can do is measure the balance of those features and their proxies within a group that is given a specific treatment. However, in April 2021 the European Commission made public an AI Act proposal whose article 10.5 states that these sensitive data could be gathered and processed in the future for the purposes of ensuring bias monitoring, detection, and correction in relation to high-risk AI systems.
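The balance measurement mentioned above can be sketched in a few lines. The following is a minimal, hypothetical illustration (group labels and decisions are made up): it compares the share of positive decisions, such as loan approvals, each group receives, a quantity often called the selection rate, and reports the largest gap between groups.

```python
# Minimal sketch (illustrative data): selection-rate balance across groups.
from collections import defaultdict

def selection_rates(groups, decisions):
    """Share of positive decisions (1 = approved) per group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for g, d in zip(groups, decisions):
        totals[g] += 1
        positives[g] += d
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_gap(groups, decisions):
    """Largest difference in selection rate between any two groups."""
    rates = selection_rates(groups, decisions)
    return max(rates.values()) - min(rates.values())

# Hypothetical decisions for two groups "a" and "b":
groups = ["a", "a", "b", "b", "b", "a"]
decisions = [1, 0, 1, 1, 1, 1]
print(selection_rates(groups, decisions))
print(demographic_parity_gap(groups, decisions))
```

In practice the same comparison would be run against proxies of the sensitive feature as well, since the feature itself is often unavailable.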
To go deeper into these issues we must realize that there are two kinds of problems that can harm fairness. Although they are often referred to under the same term, “biases”, they are of very different natures. We review them below.
2.1| Data quality, data representativeness and accuracy
Issues that lead to bad algorithmic predictions and poor accuracy metrics are often related to poor data quality, or to a lack of representativeness in the training dataset of the group we do not want to discriminate against. For example, elderly people tend to leave a very low digital footprint to be analyzed, which means they are “invisible” to our models. This under-representation in training datasets can lead to poor outcomes when assessing their creditworthiness.
An additional effect is the false positives or false negatives of individuals who are assigned the same characteristics as the group they belong to, when they are actually outliers within their class. As data are just a faint and partial shadow of the reality they represent, classification into big groups usually fails at some point, and the more granular a taxonomy is, the more accurate the outcome (provided we have enough data to describe fine-grained groups).
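A basic representativeness check is easy to automate. The sketch below, with entirely invented numbers, compares each group’s share in a training sample against a reference population share and flags groups, such as the elderly customers mentioned above, whose sample share falls well below their real-world weight.

```python
# Minimal sketch (illustrative numbers): flag under-represented groups by
# comparing sample shares against reference population shares.

def representation_gaps(sample_counts, population_shares):
    """Ratio of each group's sample share to its population share.
    Values well below 1.0 signal under-representation."""
    total = sum(sample_counts.values())
    return {
        g: (sample_counts.get(g, 0) / total) / share
        for g, share in population_shares.items()
    }

# Hypothetical age groups: 65+ is 20% of the population but 4% of the sample.
sample = {"18-34": 5200, "35-64": 4400, "65+": 400}
population = {"18-34": 0.30, "35-64": 0.50, "65+": 0.20}
gaps = representation_gaps(sample, population)
for group, ratio in sorted(gaps.items()):
    flag = "  <- under-represented" if ratio < 0.5 else ""
    print(f"{group}: {ratio:.2f}{flag}")
```

The 0.5 threshold is arbitrary here; a real data governance process would set it per use case.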
2.2| Social Biases
The aforementioned problems were caused by poor technical performance or by data lacking sufficiently granular descriptive capacity. Now we address a completely different issue: the case in which the data and the algorithms work perfectly well, but the reality behind the data is deeply uncomfortable from an ethical point of view. Our socio-economic system is far from perfect, there are many inequalities in it, and data are just the projection of that reality. Models can find those facts and build decisions on them. For instance, spelling mistakes on credit application forms can objectively correlate with higher default rates; women have lower wages than men on average, and can thus receive a worse financial score. What do we do with those imperfections, given that they are not analytical errors? These decisions should not be left to data scientists’ criteria and liability. Responsible business criteria should be defined case by case, seeking a balance between not leaving anyone behind and our profitability objectives. What is in the hands of data scientists is to raise their hands when any of these situations is detected, in order to make a quantified trade-off in the context of the Ethics Board or SteerCo described in the Governance chapter of this white paper.
2.3| Specific actions towards bias risk mitigation
The first family of issues must be addressed through technical solutions:
- Proper data governance must be carried out to avoid bad data quality and/or low representativeness. Quality enhancement processes must act upstream and propagate these efforts downstream to every application making use of the data.
- Additional sensors could be deployed to gather extra information and avoid algorithmic blindness: we could ask customers and employees for sensitive information, provided it is used solely to verify that it plays no discriminatory role.
- Technical guidelines on fairness should be developed, followed by training programs.
Regarding social biases:
- Strategic criteria should be defined: corporations can play a role in improving many situations of inequality (for example, through affirmative action policies), but conflicts with business units’ incentives can arise. Case-by-case trade-offs must be carried out; these are not technical decisions, so business and CSR areas should set the guidelines.
- Establish a fluid multidisciplinary approach to setting ethical criteria: we cannot leave the weight of these impactful decisions and their accountability to the analytical teams without helping them with clear guidance. Further reflection is needed on how to integrate new controls into the end-to-end development process of AI systems, as these decisions entail potential operational and reputational problems.
3| Algorithmic transparency
3.1| AI interpretability vs AI explainability
Before going further, it is necessary to set out these definitions and differences, which according to the EBA are:
A model is explainable when it is possible to generate explanations that allow humans to understand (i) how a result is reached or (ii) on what grounds the result is based (similar to a justification).
In the first case (i), the model is interpretable, since the internal behaviour (representing how the result is reached) can be directly understood by a human. To be directly understandable, the algorithm should therefore have a low level of complexity and the model should be relatively simple.
In the second case (ii), techniques exist to provide explanations (justifications) for the main factors that led to the output. For example, one of the simplest explanations consists in identifying the importance of each input variable (feature) in contributing to the result, which can be done via a feature ranking report (feature importance analysis).
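The feature-ranking report described in case (ii) can be approximated without any access to the model’s internals. The sketch below, a simplified and hypothetical illustration of permutation importance with a toy model, shuffles one input feature at a time and measures how much accuracy drops; features whose shuffling barely hurts accuracy contribute little to the result.

```python
# Minimal sketch (toy model and data): feature importance via permutation.
# Importance of feature i = baseline accuracy minus accuracy after
# shuffling column i, which breaks its link to the target.
import random

def accuracy(model, X, y):
    return sum(model(row) == label for row, label in zip(X, y)) / len(y)

def permutation_importance(model, X, y, n_features, seed=0):
    rng = random.Random(seed)
    baseline = accuracy(model, X, y)
    importances = []
    for i in range(n_features):
        column = [row[i] for row in X]
        rng.shuffle(column)
        X_shuffled = [row[:i] + [v] + row[i + 1:] for row, v in zip(X, column)]
        importances.append(baseline - accuracy(model, X_shuffled, y))
    return importances

# Toy model: predicts 1 when feature 0 exceeds 0.5; feature 1 is pure noise,
# so its importance comes out as exactly zero.
model = lambda row: int(row[0] > 0.5)
X = [[0.9, 0.1], [0.2, 0.8], [0.7, 0.3], [0.1, 0.9]]
y = [1, 0, 1, 0]
print(permutation_importance(model, X, y, n_features=2))
```

Production-grade variants of this idea (with repeated shuffles and proper scoring functions) exist in standard libraries, but the principle is the same.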
It might seem counterintuitive, but with the most advanced machine learning techniques data scientists do not “program” specific rules to be implemented; instead, they try different algorithms trained on a set of data, and calibrate those algorithms to reach the fittest model given an objective and a ground truth for that objective. This approach often brings an improvement in accuracy compared with traditional methods, but also a lack of transparency in the inner operations of the model, as is very well described in this report.
However, understanding how a model works is an issue that must be addressed at different levels and for different stakeholders. Its developers are the first who must have visibility of this, but, of course, customers and supervisors will also require some degree of information about what drives an algorithm to a given outcome. How can we produce those explanations, and what is the proper level of transparency to provide to each stakeholder without revealing our know-how and IP?
3.2| Specific actions towards lack of interpretability risk mitigation
- Technical guidance from a practical approach.
- Practical training could be carried out in the frame of team reskilling initiatives.
- Tools to enhance global or local explainability could be integrated into your analytical platforms.
4| Human Control and accountability
4.1| Different levels of automation
An AI system can be designed to perform under very different levels of automation, ranging from required human intervention at every stage to fully automated operation. This decision has nothing to do with the analytical development of the model underneath the system; it is a design decision about the system as a whole.
An example of highly automated AI systems in financial applications are high-frequency algorithmic trading systems. To prevent undesired events such as flash crashes, early alerts and panic-button features are integrated into these systems, as regulated by MiFID II, so that humans can be aware and retake control in case of algorithmic instability.
However, since the degree of automation is not an analytical decision but a solution design one, every model supporting an automated solution should be labeled as such in your corporation’s model inventory, in order to properly identify all models affected by GDPR art. 22 (if in the EU).
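An inventory record of this kind can be very simple. The sketch below is a hypothetical example (model names, owners, and field names are invented): each record carries an automation label, so the models that are candidates for the stricter controls on automated decision-making can be listed with one filter.

```python
# Minimal sketch (hypothetical records): a model inventory entry carrying
# an automation label and a GDPR art. 22 relevance flag.
from dataclasses import dataclass

@dataclass
class ModelRecord:
    name: str
    owner: str
    automation_level: str  # e.g. "human-in-the-loop" or "fully-automated"
    gdpr_art22_relevant: bool

inventory = [
    ModelRecord("credit-scoring-v3", "risk-team", "fully-automated", True),
    ModelRecord("churn-propensity", "marketing", "human-in-the-loop", False),
]

# Models that need the stricter controls for automated decision-making:
automated = [m.name for m in inventory if m.gdpr_art22_relevant]
print(automated)
```

A real inventory would add versioning, validation dates, and links to documentation, but the automation flag is the key field for this purpose.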
4.2| Feedback loops and adaptive algorithms: control of self-adapting systems
Traditional analytical models are manually recalibrated at regular intervals. For example, regressions used for risk scoring are reviewed every month, quarter, or year, depending on business objectives and regulatory requirements.
Nevertheless, advanced machine learning algorithms not only find patterns in the data automatically, with very little human instruction, but can also recalibrate their operations to adapt to changes in the input data if a feedback loop is implemented. Reinforcement learning models are an example. This can cause models to deviate from their initial purpose, depending on the data they receive through feedback loops, and opens the door both to unintended failures and to intentional attacks known as data poisoning.
4.3| Specific actions towards automation risk mitigation
Algorithmic quality control tests must be carried out not only as part of DevOps human validation processes before deployment, but also as automated safety checks within the continuous monitoring of operational AI systems. An additional safety measure is the development of alerts and contingency plans in case of algorithmic failure or deviation.
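A continuous-monitoring check of this kind can be sketched as a comparison of live quality metrics against agreed minimums. Everything here is hypothetical (metric names and thresholds are invented): the point is that breaches produce explicit alerts that can feed a contingency plan.

```python
# Minimal sketch (invented metrics/thresholds): automated safety check
# that flags any live quality metric falling below its agreed minimum.

def check_health(metrics, thresholds):
    """Return the list of breached checks for an operational AI system."""
    alerts = []
    for name, minimum in thresholds.items():
        value = metrics.get(name)
        if value is None or value < minimum:
            alerts.append(f"ALERT: {name}={value} below minimum {minimum}")
    return alerts

live_metrics = {"accuracy": 0.78, "coverage": 0.99}
thresholds = {"accuracy": 0.85, "coverage": 0.95}
for alert in check_health(live_metrics, thresholds):
    print(alert)  # a real system would page a team or trigger a fallback
```

Missing metrics are treated as breaches here by design: a monitoring gap should itself raise an alert rather than pass silently.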
Governance of ethical issues
As we have seen, together with great opportunities, Artificial Intelligence brings new risks to be mitigated. An excessive focus on those risks might restrain innovation; on the other hand, ignoring them, even in the early stages of AI implementation, when they seem unlikely and attention is focused on solution delivery and value demonstration, could create the conditions for a greater negative impact in the future.
To achieve proper risk mitigation two approaches can be followed:
- The creation of additional control layers and committees, parallel to the existing ones, with specific focus on the aforementioned risks.
- Leveraging the existing control processes, enriching their checks with additional requirements to address fairness, transparency, and automation issues.
From my point of view, the second approach avoids the proliferation of parallel processes and simplifies the end-to-end path toward agile solution delivery, while improving the current situation from a risk-control perspective. However, it may be necessary to establish an Ethics Board or SteerCo to resolve dilemmas and set criteria around these issues.
To carry out effective risk mitigation and enhance customers’ trust in how their data are used and what kind of AI systems those data feed, a coherent framework is required that acts on two tracks: business and design decisions, and the technical developments that enable those business aims.
This proposed framework includes actions to activate what we can call the Ethical AI by design paradigm.