AI in the UK: How ready, willing and able are we?

The House of Lords Select Committee on Artificial Intelligence has recently released its report on AI, titled AI in the UK: ready, willing and able?. You can read an overview of the report, and the report itself, here.

First, let’s set the scene. The Chairman of the Committee, Lord Clement-Jones, said:

“The UK has a unique opportunity to shape AI positively for the public’s benefit and to lead the international community in AI’s ethical development, rather than passively accept its consequences.

“The UK contains leading AI companies, a dynamic academic research culture, and a vigorous start-up ecosystem as well as a host of legal, ethical, financial and linguistic strengths. We should make the most of this environment, but it is essential that ethics take centre stage in AI’s development and use.

“AI is not without its risks and the adoption of the principles proposed by the Committee will help to mitigate these. An ethical approach ensures the public trusts this technology and sees the benefits of using it. It will also prepare them to challenge its misuse.

“We want to make sure that this country remains a cutting-edge place to research and develop this exciting technology. However, start-ups can struggle to scale up on their own. Our recommendations for a growth fund for SMEs and changes to the immigration system will help to do this.

“We’ve asked whether the UK is ready, willing and able to take advantage of AI. With our recommendations, it will be.”

The report is long (181 pages) and examines the development of AI in the UK, as well as the ethical questions connected with it. It recommends that the government support businesses in the UK so that the nation can become a global leader in the development of AI. How does it say we should do this? First, by cautioning against allowing a small number of tech giants to monopolise development. Second, by encouraging greater personal control of data, AI transparency, the introduction of AI into the educational curriculum, targeted use of AI within the public sector (in particular by the NHS), investigation of potential issues relating to AI and liability law, and a coordinated approach to government AI policy in the UK. Finally, it states clearly that there is no need for an AI-specific regulator.

Rather, the report suggests an ethical framework to guide the development and application of AI. Each point of the framework considers the social impact of AI on the public. The Committee’s suggested five principles for such a code are:

  1. Artificial intelligence should be developed for the common good and benefit of humanity.

  2. Artificial intelligence should operate on principles of intelligibility and fairness.

  3. Artificial intelligence should not be used to diminish the data rights or privacy of individuals, families or communities.

  4. All citizens should have the right to be educated to enable them to flourish mentally, emotionally and economically alongside artificial intelligence.

  5. The autonomous power to hurt, destroy or deceive human beings should never be vested in artificial intelligence.

This is all welcome news, but as we discussed in our recent article on the readiness of IP laws for AI, the UK legal system will need to adapt in order to regulate the risks of AI. The report recommends that the Competition and Markets Authority review “the use and potential monopolisation of data by big technology companies operating in the UK”, and it encourages the establishment of a fair marketplace for AI products, services and use of data. The report also recommends that the Law Commission investigate whether existing liability law will be sufficient.

We can only wait to see how this report will be received and acted upon. Over the remainder of 2018, we will be following its progress, along with the wider legal ramifications of this rapidly developing area of technology.