Emerging technologies bring huge potential, but also possible downsides. The ethics of how organisations design and implement these technologies must be considered side by side with their transformative possibilities. Both non-linear organisations and more traditional organisations that have adopted emerging technologies are finding their way through ethical grey areas. From concerns over the correct use of data and the protection of privacy, to the perceived threat to democracy from the misuse of targeted content, organisations that develop or implement these technologies have a significant obligation to consider the wider implications of their misuse or misapplication.

Algorithms reflect the biases of their datasets and of human data workers (though they are often less biased and more accurate than the humans they replace). They perceive the world as represented through their datasets, and take on the implicit values of the humans making design decisions and collecting, selecting and applying data. In addition to pre-existing biases, bias can also arise from technical constraints (e.g. failure under certain conditions, or the limitations of tools in formalising human constructs such as judgment or intuition) or emerge from new contexts of use (e.g. unintended audiences, unexpected correlations between datasets, self-reinforcing feedback loops).

The problem of bias and bad data

The problem of bias in machine learning has become more widespread as the technology is adopted by more industries, and it is likely to become more significant as the technology spreads to critical areas like medicine and law, and as more people without a deep technical understanding are tasked with deploying it. Algorithms may already be subtly distorting the kinds of medical care a person receives, or how they are treated in the criminal justice system.
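As a concrete illustration of how this kind of bias can be surfaced, the sketch below computes the gap in positive-outcome rates between two groups in a model's predictions (often called demographic parity difference). The data, group labels, and the 10% tolerance are hypothetical illustrations, not figures from any real system.

```python
# Minimal sketch of a dataset/model bias check: demographic parity difference.
# All data and the 0.1 tolerance below are hypothetical illustrations.

def demographic_parity_difference(predictions, groups):
    """Gap between the highest and lowest positive-outcome rates across groups."""
    rates = {}
    for g in set(groups):
        outcomes = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(outcomes) / len(outcomes)
    values = sorted(rates.values())
    return values[-1] - values[0]

# Example: a model approving applications, with group labels "A" and "B".
preds  = [1, 1, 1, 0, 1, 0, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

gap = demographic_parity_difference(preds, groups)
if gap > 0.1:  # illustrative tolerance
    print(f"Warning: approval-rate gap of {gap:.0%} between groups")
```

A check like this only detects one narrow kind of disparity; in practice organisations combine several such metrics, which is what the vendor toolkits discussed below bundle together.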
Those on the other side of the argument contend that, whatever bias is inherent in algorithms, it will always be much less than the inherent bias of humans, who have repeatedly been shown to make poorer decisions than algorithms. However, the impact of getting it wrong with algorithms and machine learning, as we have learned, is exponential. And isn't the point of progress that we should be doing these things not just a bit better, but multiples better – exponentially better? The question is whether the net effect of machine learning-enabled systems is to make the world fairer and more efficient, or to amplify human biases to superhuman scale.

Help (and regulation) is on the way

Companies are beginning to take seriously the need to get this technology right. Google and IBM both announced new toolkits for mitigating bias in AI in September 2018. These join similar recent launches from Facebook and others as part of a set of technology-oriented efforts that complement the ongoing work on AI ethics and risk assessment frameworks.

Regulation is coming from government too. The European Commission will present ethical guidelines on AI development, based on the EU's Charter of Fundamental Rights, before the end of 2018. In the UK in April, the House of Lords select committee on AI urged the government and industry to focus on ethics, recommending that ethics be put at the centre of artificial intelligence adoption to ensure it is developed for the common good and benefit of humanity.

What can organisations do?

Companies can manage ethical considerations, and specifically algorithmic risk, by developing and adopting new approaches that build upon the lessons learned from conventional enterprise risk management, e.g.:

- Elevate responsibility for risk management around emerging technologies from the server room into the board room.
  Executives need to be across all potential risks from the beginning. As with cyber security, emerging technologies are primarily an engine for growth, and a solid risk foundation is crucial to ensuring that growth potential can be reached.
- Develop an algorithmic risk strategy and governance structure, including policies, risk assessments, training, compliance, and complaint procedures.
- Prepare a strong inventory of key algorithms that have been tested and "risk-rated", to enable focus on the algorithms that pose the greatest risk and potential impact.
- Develop processes and approaches, aligned with the governance structure, to address the entire algorithm lifecycle – e.g. handling unexpected boundary conditions, implementing hard-coded rules to prevent extreme negative outcomes, and protecting against adversarial input from malicious actors seeking to skew models.
- Establish ongoing monitoring of data and algorithm results – testing data inputs, workings, and outputs, monitoring results, and seeking independent reviews of algorithms.

Ultimately, when data and technology culture meets customer culture, we must embed controls into the design of new initiatives to ensure that the outcomes we produce are consistent with customer experience goals.

More information on emerging technologies and the ethics that govern their implementation is available in our report: Transforming in the exponential era.
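The ongoing-monitoring recommendation above can be sketched in a few lines: compare a model's positive-prediction rate in a recent production window against the rate observed at validation time, and raise an alert when the two drift apart. The baseline, window, and tolerance values here are hypothetical, and a real deployment would monitor many more statistics than one rate.

```python
# Minimal sketch of ongoing output monitoring: flag when the positive-
# prediction rate in a recent window drifts from a reference baseline.
# Baseline, window, and tolerance values are hypothetical.

def positive_rate(outputs):
    return sum(outputs) / len(outputs)

def drifted(baseline_outputs, recent_outputs, tolerance=0.15):
    """True if the recent positive rate differs from the baseline by more than tolerance."""
    return abs(positive_rate(recent_outputs) - positive_rate(baseline_outputs)) > tolerance

baseline = [1, 0, 1, 0, 1, 0, 1, 0]  # 50% positive at validation time
recent   = [1, 1, 1, 1, 0, 1, 1, 1]  # 87.5% positive in production

if drifted(baseline, recent):
    print("Alert: output distribution has drifted; trigger an independent review")
```

An alert like this does not say the model is wrong, only that its behaviour has changed enough to warrant the independent review the governance structure provides for.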