Accountability in AI: The time is now

Artificial Intelligence (AI) is an emerging technology increasingly used by organisations today. Yet for all its promised benefits, AI is widely reported to face a significant challenge: bias.

A recent example is Amazon’s facial recognition tool Rekognition, designed to provide “real-time face recognition across tens of millions of faces”. Rekognition received backlash in 2018 after failing a test conducted by the American Civil Liberties Union, in which it incorrectly matched 28 members of the United States Congress to mugshots of people who had been arrested for a crime.

Furthermore, the tool’s error rate varied significantly across demographic groups: accuracy was higher when detecting males and lighter complexions, and lower for females and darker complexions. This is only one of many recent examples demonstrating that even seemingly objective technologies can be biased, with serious implications for how such systems are built and applied in the real world.
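To make the idea concrete, bias of this kind is typically surfaced through disaggregated evaluation: measuring a model’s error rate separately for each demographic group rather than relying on one overall accuracy figure. Below is a minimal Python sketch of such a check; the records, group labels and error counts are hypothetical illustrations, not Rekognition’s actual results.

```python
from collections import defaultdict

# Hypothetical (predicted, actual, demographic group) evaluation records.
# In a real audit these would come from a labelled test set.
results = [
    ("match", "no_match", "darker_female"),
    ("no_match", "no_match", "darker_female"),
    ("match", "match", "lighter_male"),
    ("no_match", "no_match", "lighter_male"),
    ("match", "no_match", "darker_male"),
    ("no_match", "no_match", "lighter_female"),
]

# Tally errors and totals per group, then report each group's error rate.
errors, totals = defaultdict(int), defaultdict(int)
for predicted, actual, group in results:
    totals[group] += 1
    if predicted != actual:
        errors[group] += 1

for group in sorted(totals):
    rate = errors[group] / totals[group]
    print(f"{group}: error rate {rate:.0%} ({errors[group]}/{totals[group]})")
```

A single headline accuracy number would hide exactly the demographic gaps this kind of breakdown exposes.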

Aim

To unpack contemporary issues around AI and bias, the AI Now Institute published its 2018 report examining accountability for the harmful consequences of AI.

Method

The AI Now Institute at New York University is a research institute dedicated to understanding the social implications of AI technologies. The Institute’s founders wrote this edition in conjunction with post-doctoral researchers, engaging a broad array of stakeholders to understand and respond to the challenges stemming from rapid growth in such uncharted territory.

Findings from the report:

The report’s key findings spanned a number of issues with implications for business, politics and society. Four key findings related to bias in AI and business are outlined below (for a detailed overview, refer to the full report):

1. The “fail fast” mindset: Emerging technology tends to adopt a “fail fast, fail often” approach, placing pressure on quick development and early releases. Consequently, iterating and delivering may take priority over investment in governance, quality assurance and risk mitigation, meaning disparate impacts on disadvantaged groups of users may be overlooked.

2. The black box issue: Due to the competitive nature of the AI market, stringent confidentiality and trade-secrecy barriers create a “black box” effect that significantly impedes transparency for users and purchasers. Consequently, accountability becomes harder to define and users’ power to question the technology is limited.

3. A stark culture divide: A significant cultural gap exists between the concentrated technical communities who develop AI and the diverse population, usually external to those communities, whom it typically affects. Operational structures and the expertise required significantly impede the non-technical group’s capacity to be involved, resulting in strong power asymmetries between technical and non-technical communities.

4. The formula for fairness: Much of what happens in AI is based on formulas and algorithms, yet how bias and fairness are “calculated” is not studied at the depth required. Work on fairness often addresses mathematical factors rather than the core issues that prompt the structural change critical to combating bias, posing the risk of only ever making surface-level progress towards fairness (a toy illustration follows below).
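The tension in finding 4 can be seen even in a toy example. Two widely used fairness formulas, demographic parity (equal approval rates across groups) and equal opportunity (equal true-positive rates across groups), can disagree about the same set of decisions. The Python sketch below uses made-up loan decisions purely for illustration; nothing in it comes from the report itself.

```python
# Hypothetical loan decisions per group: (approved?, actually creditworthy?)
group_a = [(True, True), (True, False), (True, True), (False, True)]
group_b = [(True, True), (True, True), (False, False), (False, False)]

def approval_rate(decisions):
    # Demographic parity compares raw approval rates across groups.
    return sum(approved for approved, _ in decisions) / len(decisions)

def true_positive_rate(decisions):
    # Equal opportunity compares approvals among the creditworthy only.
    outcomes = [approved for approved, creditworthy in decisions if creditworthy]
    return sum(outcomes) / len(outcomes)

for name, group in (("A", group_a), ("B", group_b)):
    print(f"Group {name}: approval rate {approval_rate(group):.0%}, "
          f"true positive rate {true_positive_rate(group):.0%}")

# Group A is favoured by approval rate (75% vs 50%), while Group B is
# favoured by true positive rate (100% vs 67%), so whether the system is
# "fair" depends entirely on which formula is chosen.
```

Choosing one metric over another is itself a structural decision about whose outcomes matter, which is exactly the depth of analysis the report argues is currently missing.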

Implications and key takeaways for organisations:

All AI stakeholders are encouraged to think deeply about how to harness the best facets of emerging technologies without undoing the inspiring progress made in areas such as diversity, inclusion and bias reduction. On this, the report offers the following recommendations relevant to organisations:

1. Ensure employees are provided with structural and moral support for ethical concerns about the technology they are creating, using or selling. That is, create a culture where employees feel safe to speak up, where teams have full transparency over their work, and where individuals can safely decline to contribute to work about which they have ethical concerns.

2. Look beyond bias in recruitment alone and ensure that practices preventing bias and discrimination are upheld across the entire employee lifecycle (e.g. reward, promotions), not just in the context of selection.

3. Build a rigorous decision-making process: invest time and diverse resources when selecting AI technology, such as an Automated Decision System, and in researching vendors and governing those relationships, to ensure accountability for both vendors and decision makers.

4. Ensure efforts to achieve “fairness” include diverse contributions from technical and non-technical communities, asking “fair to whom, and in what context?” to avoid the influence of implicit assumptions that would hinder genuine fairness.

5. Examine how bias might be embedded within data or systems at each stage of the AI product lifecycle, as it could ultimately taint the final product (a simple data check is sketched below).
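On recommendation 5, one early and inexpensive lifecycle check is to look for skew in the training data itself, for example whether positive labels are distributed very unevenly across groups. The Python sketch below assumes a simple list-of-dicts dataset with hypothetical field names (“group”, “label”); it is a starting point, not a complete bias audit.

```python
from collections import Counter

# Hypothetical training records; field names and values are illustrative.
records = [
    {"group": "A", "label": 1}, {"group": "A", "label": 1},
    {"group": "A", "label": 0}, {"group": "B", "label": 1},
    {"group": "B", "label": 0}, {"group": "B", "label": 0},
]

# Compare how often each group carries the positive label in the data.
# Large gaps here can be learned by a model and surface as biased outputs.
positives, totals = Counter(), Counter()
for record in records:
    totals[record["group"]] += 1
    positives[record["group"]] += record["label"]

for group in sorted(totals):
    share = positives[group] / totals[group]
    print(f"group {group}: {share:.0%} positive labels "
          f"({positives[group]}/{totals[group]})")
```

Similar disaggregated checks can be repeated at later stages, on model outputs and on post-deployment outcomes, so that bias introduced anywhere along the pipeline is caught before it taints the product.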

Ultimately, the digital era is upon us, and the increasing interest in bias mitigation indicates that business understanding of this topic is maturing. As these opportunities continue to grow, we must not overlook the unwavering need to invest in accountability measures to create the best environment for success, for both AI products and the people they affect.

For more information about this article, contact Rachael Salamito.

