Bringing non-discrimination to life: Human rights and machine learning

The need for a machine learning framework

There is no doubt that machine learning (ML) is unlocking a new world of value for organisations around the world. But with it come new challenges, arising from the complexity, opaqueness, ubiquity and exclusiveness of these systems. These characteristics, along with choices in algorithm design and training data, can produce unexpected discriminatory outcomes.

This is particularly problematic where businesses rely on ML for decision-making, as it can undermine human rights. Practical, industry-wide approaches to overcoming these technology challenges are few and far between, and regulators are struggling to keep up with new advances. This paper, by the World Economic Forum, is a call to action to business leaders across the globe to strengthen their organisations’ actions, analysis and self-reporting in the development and use of ML, so that discriminatory outcomes do not occur and human rights are protected.

Aim

Machine learning (ML) is defined here as a model “that leverages computer programs that automatically improve with experience and more data.” This white paper proposes a framework to help leaders and organisations understand and manage potential risks that may arise from ML applications. It makes the case that businesses need to take “ongoing, proactive and reactive steps” in their use of ML systems in order to prevent discriminatory outcomes and thus uphold human rights.

Method

This paper was developed from discussions among members of the World Economic Forum Global Future Council on Human Rights and experts in the fields of human rights and machine learning. The conceptual approach was based on the rights outlined in the Universal Declaration of Human Rights as well as other international treaties that outline legal standards relating to human rights and discrimination.

The framework

This paper proposes four guiding principles and a three-step approach for leaders and organisations to adopt to prevent discriminatory outcomes occurring in ML.

Guiding principles

1. Active Inclusion – ensuring that the teams developing ML applications are diverse in composition and actively seek input from the populations that may be affected by the system’s outputs.

2. Fairness – defining what ‘fairness’ means in the given context, and ensuring that questions of bias, advantage and disadvantage for different groups of people are considered throughout the life cycle of the system.

3. Right to Understanding – identifying where ML plays a role in decisions that affect individuals’ rights, disclosing that use, explaining the process to end users, and ensuring the decision can be reviewed by a human authority.

4. Access to Remedy – establishing visible and accessible measures for redress in the case of discriminatory outputs of ML.

Recommended approach

Step 1: Identify ML risks – all organisations have a responsibility to map and track human rights risks in the design and implementation of ML throughout the life cycle of its use. This is particularly relevant when selecting data sets, which may have captured bias towards certain populations.
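To make this concrete, the minimal sketch below (our illustration, not part of the WEF paper) uses pandas to compare how each group is represented in a training set and how often it receives the positive outcome. The file name and the `gender` and `approved` columns are hypothetical.

```python
import pandas as pd

def dataset_bias_report(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.DataFrame:
    """Summarise how each group is represented in the data and how often it
    receives the positive outcome, to surface candidate bias risks early."""
    report = df.groupby(group_col).agg(
        share_of_records=(outcome_col, lambda s: len(s) / len(df)),
        positive_rate=(outcome_col, "mean"),  # assumes a 0/1 outcome column
    )
    # Ratio of each group's positive rate to the best-served group;
    # low values flag groups the data may disadvantage.
    report["disparity_vs_max"] = report["positive_rate"] / report["positive_rate"].max()
    return report

# Hypothetical loan-application training data.
applications = pd.read_csv("training_data.csv")
print(dataset_bias_report(applications, group_col="gender", outcome_col="approved"))
```

A report like this does not prove discrimination on its own, but large gaps in representation or positive rates are exactly the kind of risk Step 1 asks teams to surface and track.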

Step 2: Take effective preventative action – people working with and developing ML should be encouraged to run “ongoing human-in-the-loop checks” to prevent and mitigate discriminatory outputs. Effective governance needs to be in place to ensure that positive human rights outcomes are prioritised by the development team.
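As one illustration of what a human-in-the-loop check could look like in a decision pipeline (a sketch under our own assumptions, not a mechanism prescribed by the paper), the snippet below declines to automate cases the model is not confident about and routes them to a human reviewer instead; the thresholds are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    outcome: str          # "approve", "deny" or "needs_human_review"
    model_score: float    # model's estimated probability of a positive outcome
    reviewed_by_human: bool

# Hypothetical confidence band: scores inside it are escalated to a person
# rather than being decided automatically.
CONFIDENCE_BAND = (0.35, 0.65)

def decide(model_score: float) -> Decision:
    low, high = CONFIDENCE_BAND
    if low <= model_score <= high:
        # Human-in-the-loop check: uncertain cases go to a reviewer.
        return Decision("needs_human_review", model_score, reviewed_by_human=True)
    outcome = "approve" if model_score > high else "deny"
    return Decision(outcome, model_score, reviewed_by_human=False)

print(decide(0.52))  # escalated for human review
print(decide(0.91))  # automated approval
```

In practice the escalation rule would be set by the governance process the paper describes, and could also trigger on detected disparities between groups, not just on model confidence.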

Step 3: Be transparent about efforts – organisations should aim to be explicit and open about the design, workings and impacts of ML. This comes to life through mechanisms such as internal auditing and open reporting.
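The paper does not prescribe a particular auditing mechanism, but a minimal internal audit trail might look like the sketch below, which appends each automated decision, together with the inputs and model version behind it, to a log that auditors can review later; all names are illustrative.

```python
import json
import time

def log_decision(record_id: str, model_version: str, features: dict,
                 outcome: str, path: str = "decision_audit.jsonl") -> None:
    """Append one automated decision to an audit trail so internal auditors
    (or external reviewers) can reconstruct how it was made."""
    entry = {
        "timestamp": time.time(),
        "record_id": record_id,
        "model_version": model_version,
        "features": features,   # the inputs the model actually saw
        "outcome": outcome,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

# Hypothetical usage for a credit decision.
log_decision("app-001", "credit-model-v3", {"income": 52000, "age": 41}, "approve")
```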

Implications

For leaders, gaining a comprehensive understanding of the human rights risks involved in emerging technologies is critical. With regulators struggling to keep up, addressing current human rights issues and preventing future incidents is a responsibility that the WEF states “exists over and above compliance with national laws and regulations.” Businesses have a responsibility to communities to make sure that they are doing everything in their power to prevent human rights abuses occurring.

An important factor for leaders to take into account is how discriminatory outcomes can arise at each stage of design, development and data input. While biases may be unintentional, that does not lessen their effects. Leaders should ensure that their teams are trained, and that due diligence and consultation processes take place at the appropriate stages of ML development and use.

This is a challenge that needs to be faced head on, with openness and transparency, not only within an organisation but also towards the groups that may be affected. Tech firms including Google, IBM, Facebook and Microsoft (to name a few) have acknowledged this through the anti-bias tools they are currently developing. However, the human rights implications of ML reach across almost every sector – from government and healthcare to financial institutions and not-for-profits.

This paper gives a clear, industry-agnostic, three-step roadmap that leaders can put into action to begin to tackle their role in this emerging ethical issue. With businesses’ ML programs affecting the lives of millions of people across the globe, this is truly a situation where with great power comes great responsibility.

For more information, contact Hilary Binks.

To read the full article, click here.

