Accountability for Algorithms: A Balancing Act for Governments

Consumers all over the world may not know it, but many decisions about and for them are increasingly made by algorithms. Eighty percent of viewing activity on Netflix is driven by algorithmic recommendations; a doctor might recommend a course of treatment based on an algorithm that uses patient DNA; a social protection program might determine eligibility for an assistance payment using an algorithm that crunches mobile phone transactions; and a financial provider can determine creditworthiness or insurability based on algorithms that leverage alternative data.

A persistent challenge in reaching low-income and vulnerable populations with financial services has been the limited availability of data. In recent years, the growth of consumer data trails in emerging markets, largely generated by mobile phones, has helped change the equation alongside powerful new analytical tools. But what data goes into consequential algorithms, how it is used, and whether it leads to inclusion or instead to bias and exclusion is largely a black box to consumers and governments alike. While the expansion of consumer data and of companies leveraging artificial intelligence (AI) can advance financial inclusion, these technologies introduce new risks and may reinforce historical patterns of bias and exclusion, with potentially harmful consequences for consumers.

Central banks and policymakers have long faced the challenge of balancing financial innovation, market growth, and inclusion with competition, stability, and consumer safety. That balancing act has become more difficult with the speed and scope of financial innovation and the increasing sophistication of algorithms. In emerging markets and developing countries in particular, the growing use of mobile phone data for consequential decisions has raised both the profile and the potential risks of these tools.

There is growing recognition that governments need ways to assess the consumer risks and opportunities in the deployment of these algorithms and AI systems, as well as methods to amplify the opportunities and mitigate the risks, all without quashing innovation. But how?

Earlier this month, H.M. Queen Máxima of the Netherlands, the United Nations Secretary-General’s Special Advocate for Inclusive Finance for Development (UNSGSA), co-hosted a virtual workshop with the Center for Financial Inclusion (CFI) to raise awareness of and address algorithmic bias in financial inclusion. Queen Máxima and CFI Managing Director Mayada El-Zoghbi co-chaired the meeting, which convened leadership from select central banks and other subject matter experts. The workshop examined how to identify and assess these emerging risks, discussed existing risk mitigation tools, and identified opportunities for further research.

The challenge of bias in financial services is not new, and many countries, including just under half of emerging markets, already have financial sector anti-discrimination laws that hold financial institutions to fair lending practices. In the United States, for instance, the Equal Credit Opportunity Act of 1974 makes credit discrimination based on protected classes illegal. But the law runs into limits in the face of algorithmic lending because it also restricts lenders from collecting demographic information on borrowers. Given current methods for testing algorithms, this missing demographic data makes disparate impact testing much harder, as the sketch below illustrates. While the Equal Credit Opportunity Act and similar anti-discrimination laws could be used to hold companies accountable, applying them will mean grappling with understandable privacy concerns.
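To see why the missing data matters, consider the "four-fifths rule," a common heuristic in disparate impact testing: if the approval rate for a protected group falls below 80 percent of the rate for a reference group, the outcome may indicate adverse impact. The minimal Python sketch below uses hypothetical group labels and lending decisions; the point is simply that the test cannot be computed at all without knowing which applicants belong to which group.

```python
# A minimal sketch of a four-fifths rule check for disparate impact.
# The group labels and lending outcomes below are hypothetical; the
# calculation depends entirely on knowing each applicant's group.

def approval_rate(outcomes, groups, group):
    """Share of applicants in `group` who were approved (1 = approved)."""
    decisions = [o for o, g in zip(outcomes, groups) if g == group]
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(outcomes, groups, protected, reference):
    """Approval rate of the protected group relative to the reference group."""
    return (approval_rate(outcomes, groups, protected) /
            approval_rate(outcomes, groups, reference))

# Hypothetical decisions: 1 = approved, 0 = denied.
outcomes = [1, 1, 1, 1, 0, 1, 0, 0, 1, 0]
groups   = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

ratio = disparate_impact_ratio(outcomes, groups, protected="B", reference="A")
print(f"Disparate impact ratio: {ratio:.2f}")  # prints 0.50
# A ratio below 0.8 is conventionally treated as evidence of adverse impact.
```

In practice, regulators and lenders use far more sophisticated statistical tests, but they share this dependence on demographic labels, which is precisely the information laws like the Equal Credit Opportunity Act restrict lenders from collecting.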

A second potential avenue for accountability could be through data protection laws, the most well-known being the European Union’s General Data Protection Regulation (GDPR). While GDPR requires a data protection impact assessment for any high-risk data processing, like a new credit scoring algorithm, such an assessment does not amount to an audit. GDPR and the laws it has inspired also enumerate individual rights for consumers, including the right to an explanation of an automated decision and the right to have incorrect data rectified by a provider. But we worry: how likely is it that low-income consumers (or consumers writ large, really) will be aware enough to be empowered by these rights? If consumers are not aware of how their data is being used, they are less likely to exercise their rights, raise their collective voice, and seek accountability for algorithmic decisions.

In recognition of the limits of existing approaches, some countries have drafted national AI policies or frameworks. These policies seek to establish the level of risk posed by discrete AI systems and to assign corresponding levels of supervision. For instance, the EU has introduced a draft law that would require algorithmic impact assessments for high-risk systems, such as automated credit decisions. Such audits would require technical expertise within governments and some degree of disclosure of source code and data. We foresee challenges in implementing them in many markets, given gaps in government resources and staffing, as well as the difficulty of creating auditing standards that balance supervision with innovation and with the proprietary nature of the code.

One way that governments are engaging is through test-and-learn partnerships with fintechs. In the US, for instance, the Consumer Financial Protection Bureau (CFPB) has granted a no-action letter to Upstart Network, an online lender that uses alternative data. The letter essentially immunizes Upstart from regulatory action; in exchange, Upstart provides information about its loan applications, its decision-making methodology, and whether its loans expand financial inclusion. Other governments have also conducted test-and-learn engagements with fintechs and have created fora for enhanced public-private engagement, such as the AI Forum in the Netherlands and the Veritas initiative in Singapore. Other emerging approaches include working across regulators, such as the Digital Markets Taskforce in the UK.

While there are strong examples and initial work that could be shared, there is clearly still much to learn. What enhancements to existing legal frameworks are needed to address the new or shifting risks of AI in digital financial services? What digital public goods can help mitigate the attendant risks? How can supervisors and regulators build the capacity to understand data inputs and use cases, and develop principles and assessment methodologies that are regularly evaluated? These questions would best be answered through research and dialogue among regulators as well as with the private sector. Only through a concerted effort will the enormous potential of algorithms and AI extend the reach of financial services in a manner that is inclusive, safe, and fair for customers.
