Explaining The Inexplicable? Explaining Decisions Using Artificial Intelligence (Machine-Learning)

Monday, 22 June 2020
By Bob McDowall


This article reflects on the impact of the guidance published by the UK Information Commissioner's Office (ICO) and the Alan Turing Institute on explaining decisions made with Artificial Intelligence (AI, perhaps better termed 'machine-learning'). The guidance provides enterprises with practical advice to help explain the processes and services that contribute to AI, and seeks to improve the accountability and transparency of AI decisions. It is focused primarily on the public sector but has significant implications for the private sector.

Summary Of The Recommendations

The guidance recognises the diverse nature of AI and the ability of AI to transform the power of computing to interpret the world. It covers its ability to analyse and predict human behaviour across many different areas, including health, logistics and crime.

The guidance is designed to help enterprises understand the challenges and risks of “unexplainable decisions”. It offers enterprises advice on how to achieve “explicability” and demonstrate best practice, and seeks to explain AI, citing examples of AI in practice and its impact on enterprises.

Eight recommendations are made to government, national bodies and existing regulators:

Ethical Principles & Guidance: The Public needs to understand the high-level ethical principles that govern the use of AI in the public sector. Guidance on use of AI in the public sector should be easier to use and understand and promoted extensively. Unfortunately, the guidance notes that there are currently three different sets of ethical principles intended to guide the use of AI in the public sector – the SUM Values and FAST Track Principles (see figure 1), the OECD AI Principles, and the UK Data Ethics Framework. All of these principles evolved separately and it is currently unclear how they work together or which principles public bodies should follow.

Figure 1


Articulate A Clear Legal Basis For AI: This should be done before delivery of AI. However, the guidance notes that a strong and coherent regulatory framework for AI in the UK public sector is still a work in progress, with some areas, such as healthcare, more advanced in thinking than others, such as the police and legal service.

AI Must Comply With Data Bias And Anti-Discrimination Law: AI systems are “trained” using data, and are only as good as the data and algorithmic models from which they are constructed. Data or algorithms may contain implicit observer-mediated race, gender, or ideological biases. AI systems constructed from flawed algorithms or trained using flawed data may result in discrimination or bias. Recognition of this risk is the first step in ensuring that AI systems comply with anti-discrimination law.

Regulatory Assurance Body: While the guidance does NOT recommend creation of an AI Regulator, it does recommend the establishment of a regulatory assurance body that identifies gaps in the regulatory landscape and advises individual regulators and government on issues of AI.

Procurement Roles And Processes: Ethical requirements to meet public standards should be expressly written into public tenders and contractual arrangements.

Crown Commercial Service's Digital Marketplace: The Crown Commercial Service should introduce practical tools into its AI framework to help public bodies and private sector enterprises delivering services to the public sector.

Impact Assessment: The Government should consider how mandatory AI impact assessments can be integrated into existing processes to evaluate the potential effects of AI on public standards. Such impact assessments should be published.

Transparency And Disclosure: Government should establish guidelines for declarations and disclosures about the AI systems that public bodies deploy.

Seven recommendations have also been made to front-line providers of public services to help establish effective risk-based governance for the use of AI. Most reflect normal practice and procedure, with the exception of the appeal and redress recommendation:

  1. Risk to public standards must be evaluated
  2. Diversity in all its dimensions, including behaviour, background and “points of view”, must be taken into account
  3. Responsibility for building and operating of AI must be agreed, allocated and documented
  4. AI systems must be continually monitored and evaluated
  5. Oversight mechanisms must cover services provided by both the public and private sectors
  6. Training and education must be provided
  7. Appeal and redress - this measure proposes that both public sector and private providers of public services must inform citizens of their right of appeal, and the method of appealing, against automated and AI-assisted decisions


The recommendations do not propose the establishment of an AI Regulator, and note that regulation can stifle innovation, especially in the creative sector of AI. One hopes that any future scandals or investigations into abuse of AI do not cause a reversal of the guidance-based, rather than regulatory, approach.


The guidance encompasses a very broad definition of AI, describing everything from “routine data analysis to complex neural networks.” While the scope of the recommendations is sufficiently broad-brush to embrace this diversity, future developments in the field of AI will almost certainly require more targeted guidance, segmented by type of AI. Such segmentation will help maintain a guidance-based rather than regulatory approach to oversight of developments.

The guidance recommendations are targeted at the public sector, where AI decisions can have wide and deep impact on social and political decision-making. These decisions, derived from AI-generated analysis, have the potential to affect the allocation of public finance as well as presenting political risk.

The recommendations should, and almost certainly will, be adopted by the private sector. Covid-19 is likely to bring greater government oversight of the private sector, as Government is likely to provide direct and indirect funding for AI development in seeking to build national capabilities in what is generally agreed to be an area of rapid development over the next five years. Likewise, institutional investors in the private sector will begin scrutinising the ethical aspects of AI developments and will want to set benchmarks. The guidance recommendations will provide a starting point for this.

Any future changes in Standards in Public Life will feed through to guidance, and to the oversight and deployment of AI. The flexibility of AI systems, and their capacity to be adjusted to meet changing standards, remain to be seen, but should certainly be taken into account at the specification stage. This would be greatly helped by segmenting future guidance according to type of AI.

Jurisdictional issues receive little attention in the guidance. However, such issues can be problematic, especially with respect to the GDPR. In the private sector, AI may be built in one jurisdiction, operated in a second, and its data results transmitted to others. By implication, these issues will have to be addressed during the procurement, build, operation, risk and oversight processes, while oversight of jurisdictional issues is managed by regulatory assurance bodies.


The guideline recommendations are helpful. They mercifully fall short of establishing a dedicated AI regulatory body, but, I fear, such a body could yet be established if a public scandal emerges with AI at its root. Unfortunately, the guidelines will add to the complexity and costs of AI developments, and small enterprises may have to partner with larger enterprises to bring their AI innovations to market.