AI Bill – Regulatory capability

Written by Lord Holmes of Richmond MBE, House of Lords Science and Technology Select Committee

I have drafted and introduced to Parliament a law to regulate AI – the Artificial Intelligence (Regulation) Bill. The Government’s current approach, light-touch regulation in an effort not to dampen innovation, is well intentioned but wrong. Right-sized regulation will support, not stifle, innovation and is essential for embedding the ethical principles and practical steps that will ensure AI development flourishes in a way that benefits us all – citizens and state. I have already written about the need for leadership and a focus on ethical AI. We also need to address regulatory capacity and construct an agile but comprehensive regulatory framework.

Government position

Currently, the UK approach to AI regulation rests on five principles, modelled loosely on those published by the OECD:

1. Safety, security and robustness

2. Appropriate transparency and explainability

3. Fairness

4. Accountability and governance

5. Contestability and redress

The Government intends for these principles to be interpreted and acted on by existing regulators – such as the Financial Conduct Authority in the finance sector, and the Medicines and Healthcare products Regulatory Agency in the pharmaceutical sector – to ‘guide and inform the responsible development and use of AI in all sectors of the economy’. They are effectively instructions to regulators about what outcomes they should be aiming for when AI is deployed in the areas for which they are responsible. The principles will not – initially – be placed on a statutory footing, and so regulators will have no legal obligation to take them into account, although the Government has said it will consider introducing legislation in the future.

Why legislate now?

The current situation leaves significant gaps in the legal framework for governing AI. Legal analysis from the Ada Lovelace Institute identified several issues:

  • A highly inconsistent set of powers across regulators to monitor, investigate and enforce the principles within their domains – for example, many regulators do not have appropriate information-gathering powers that permit them to interrogate the existence and functionality of algorithms, AI models, and underlying data used by regulated entities.
  • Absence of regulators to enforce the principles in domains such as recruitment and employment, or diffusely regulated areas of public service delivery like policing or benefits and tax administration.
  • Absence of developer-focused obligations that would meaningfully incentivise larger companies developing powerful ‘foundation models’ like GPT-4 to adhere to transparency and safety measures. The Government has acknowledged the risks from these models in its own research, stating ‘there may not be sufficient economic incentives to develop advanced AI with sufficient guardrails in place, and adequate safety standards have not yet been established for these potential future risks’. Regulators will also need powers to address AI harms that flow from these models at their source, rather than ‘downstream’ at the point of use.
  • Absence and high variability of meaningful recourse mechanisms when things go wrong – for example, the inability of ordinary people to secure enough information about automated decisions made about them to meaningfully challenge them under existing laws.

The AI authority

My Bill proposes the creation of an ‘AI Authority’ to spot these gaps and ensure that relevant regulators take account of AI. A full list of proposed responsibilities is set out in Clause 1 of the Bill:

(1) The Secretary of State must by regulations make provision to create a body called the AI Authority.

(2) The functions of the AI Authority are to—

(a) ensure that relevant regulators take account of AI;

(b) ensure alignment of approach across relevant regulators in respect of AI;

(c) undertake a gap analysis of regulatory responsibilities in respect of AI;

(d) coordinate a review of relevant legislation, including product safety, privacy and consumer protection, to assess its suitability to address the challenges and opportunities presented by AI;

(e) monitor and evaluate the overall regulatory framework’s effectiveness and the implementation of the principles in section 2, including the extent to which they support innovation;

(f) assess and monitor risks across the economy arising from AI;

(g) conduct horizon-scanning, including by consulting the AI industry, to inform a coherent response to emerging AI technology trends;

(h) support testbeds and sandbox initiatives (see section 3) to help AI innovators get new technologies to market;

(i) accredit independent AI auditors (see section 5(1)(a)(iv));

(j) provide education and awareness to give clarity to businesses and to empower individuals to express views as part of the iteration of the framework;

(k) promote interoperability with international regulatory frameworks.

(3) The Secretary of State may by regulations amend the functions in subsection (2), and may dissolve the AI Authority, following consultation with such persons as he or she considers appropriate.

Who regulates what?

During the second reading debate on the Bill my colleague Lord Young of Cookham drew the Minister’s attention to Clause 1(2)(c), which provides that one function of the AI Authority is to,

“undertake a gap analysis of regulatory responsibilities in respect of AI”.

The Government’s White Paper and the consultation outcome contain numerous references to regulators, but do not name them. There is no list of all regulators, or of the relevant regulators, nor any indication of what each regulates that could be mapped to responsibilities for regulating AI.

Lord Young went on to make the point that this lack of clarity can lead to confusion. Using the education sector as an illustration, he pointed out:

We have a shortage of teachers in many disciplines, and many complain about paperwork and are thinking of leaving. There is a huge contribution to be made by AI. But who is in charge? If you put the question into Google, it says, “the DfE is responsible for children’s services and education”. Then there is Ofsted, which inspects schools; there is Ofqual, which deals with exams; and then there is the Office for Students. The Russell Group of universities have signed up to a set of principles ensuring that pupils would be taught to become AI literate.

Who is looking at the huge volume of material which AI companies are drowning schools and teachers with, as new and more accessible chatbots are developed? Who is looking at AI for marking homework? What about AI for adaptive testing? Who is looking at AI being used for home tuition, as increasingly used by parents? Who is looking at AI for marking papers? As my noble friend said, what happens if they get it wrong?

Lord Young of Cookham, House of Lords, 22 March 2024

Lord Young followed up this powerful intervention by asking the Government whether they would publish a list of the regulators referred to in ‘A pro-innovation approach to AI regulation’.

A wide range but not specific

The Minister responded this week with this statement:

Given the cross-cutting nature of AI, our regulatory approach is relevant to a wide range of regulators, and as such the White Paper and government response did not refer to specific regulators. We encourage all regulators to consider how our AI regulatory principles may be applied within their remits and have published guidance to support them with this.

We have published the letters that the Secretary of State wrote jointly with cabinet colleagues to a number of regulators impacted by AI, asking them to publish an update on their strategic approach to AI by 30th April.

Viscount Camrose

The Government has published some non-statutory principles and has asked all regulators to think about how AI fits into their remit. It is asking a lot of our regulators and giving them very little of what they need to grapple with this huge technological advance. The Ada Lovelace Institute’s report is clear about the scale of the challenge and makes several recommendations on regulatory capability, stressing that regulating AI is resource-intensive and highly technical: regulators, civil society organisations and other actors need new capabilities to properly carry out their duties.

Conclusion

This is why I have drafted my AI Regulation Bill: to proactively engage fellow Parliamentarians and the Government with the ideas, and the concrete steps – such as a targeted, well-resourced coordinating regulatory body like the AI Authority – that we need to take to ensure we shape AI positively for the public’s benefit and lead the international community in AI’s ethical development.

