To legislate, or not to legislate on AI? The UK government thinks it has the answer

Debbie Heywood looks at the latest UK AI policy developments.

What’s the issue?

As the EU’s AI Act moves rapidly towards enactment, the UK is holding firm on its alternative approach to regulating AI. In its March 2023 AI White Paper, the government proposed a sector-focused, principles-based regime to regulate AI, rather than AI-specific legislation. Whereas the EU has chosen to introduce top-down legislation, the UK proposed that relevant regulators, including the ICO, FCA, CMA and MHRA, regulate AI within their competencies based on five AI principles:

  • safety, security and robustness 
  • appropriate transparency and explainability 
  • fairness 
  • accountability and governance 
  • contestability and redress. 

This approach was subject to a consultation.

What’s the development?

On 6 February 2024, the Department for Science, Innovation and Technology published the UK government’s response to the consultation on its March 2023 AI White Paper.

No change in approach – yet

The government confirms that its overall strategy has not changed as a result of the consultation. For now, the intention is to rely on sector-based regulation informed by the five (unchanged) AI principles, rather than introduce AI-specific legislation. Relevant regulators are to publish an outline of their regulatory approach by 30 April 2024, supported by new government guidelines (some of which were published alongside the response) and the AI Standards Hub. £100 million has been allocated for new AI innovation and to enhance the capability of the regulators. The regulators’ outlines should include:

  • An outline of the steps they are taking in line with the expectations set out in the White Paper.
  • An analysis of AI-related risks in the sectors and activities they regulate and the actions they are planning to take to address these.
  • An explanation of their current capability to address AI as compared with their assessment of requirements and the actions they are taking to ensure they have the right structures and skills in place.
  • A one-year strategic plan.

While the government noted there was support for a central AI regulatory function in the responses, it does not propose a single, central regulator but instead points to steps it is already taking to help regulator coordination. It intends to set up a steering committee with government representatives and key regulators by spring 2024, and will support coordination through the Digital Regulatory Cooperation Forum. It has also established lead AI Ministers and will set up a new Inter-Ministerial group to help coordinate actions. 

A need to legislate in future?

The government does, however, make clear that legislation may be needed in future, notably in relation to the most advanced (or highly capable) general purpose AI. This is in keeping with messaging, particularly around the November 2023 AI Safety Summit, when calls for an international oversight body and some form of global consensus on AI regulation came to the fore. We are, however, some way from that point – there isn’t even agreement on terminology at the moment. 

Automated decision making

One area where the government does propose to legislate is automated decision making. However, this will be done through the Data Protection and Digital Information Bill, under which the government proposes to expand the lawful bases on which solely automated decisions with a legal or similarly significant effect on individuals may be made.

The intention is to replace the current Article 22 of the UK GDPR with new specific safeguards for automated decision making, including information requirements and redress and review rights for individuals. Automated decision making which produces a legal or similarly significant effect will be prohibited where special category data is being processed, unless the lawful bases of consent, contractual necessity or compliance with a legal obligation can be relied on. In the latter two cases, the processing must also be in the substantial public interest.

Other announcements 

The government set out a detailed plan of action for 2024 in the response and also announced: 

  • That the Centre for Data Ethics and Innovation will become the Responsible Technology Adoption Unit.
  • Various funding commitments, including the £100 million for new AI innovation and to enhance the capability of relevant regulators.
  • It will not ask the IPO to produce a voluntary code of practice on copyright and AI. 

On 13 February, the government also published an ‘Introduction to AI assurance’ guide.

Meanwhile in the EU…

The EU’s AI Act is nearing enactment following political agreement and adoption by the Council and European Parliament Committees. It is expected to be adopted at the European Parliament plenary session on 10 or 11 April, after which it will be published in the Official Journal and come into force 20 days later. A version of the consolidated text was leaked at the end of January and subsequently published by the European Commission.

The European Commission announced an AI Pact – a voluntary scheme to foster early implementation of measures in the AI Act. It also published a set of Q&As at the end of 2023, following provisional political agreement of the AI Act. This outlines some of the highlights of the AI Act including its application, various risk categorisations and associated duties, the way the AI Act will be administered and enforced, and fundamental rights.

The European Commission launched an AI innovation package on 24 January 2024, to support AI startups and SMEs. The package includes: 

  • Adoption of a Decision to establish the European AI Office. The Decision took effect on 21 February. The AI Office will be created within the Commission to oversee the most advanced AI models, contribute to fostering standards and testing practices, and enforce EU-wide rules. The Office will be advised by a scientific panel of independent experts, in particular about GPAI systems, foundation models, and material safety risks. It is intended to become a central co-ordination body for AI policy at EU level. 
  • An amendment of the EuroHPC Regulation to set up AI factories.
  • An EU AI Start-Up and Innovation Communication which makes provision for financial support and initiatives to up-skill the EU’s talent pool, encourage investment in AI, accelerate the setting up of Common European Data Spaces to be made available to the AI community, and the GenAI4EU initiative to support the development of novel use cases. 
  • Two European Digital Infrastructure Consortiums with a number of Member States, to help develop a common European infrastructure in language technologies, and to help cities take advantage of AI tools. 
  • A Communication outlining the Commission’s own strategic approach to the use of AI.

And in the USA…

The US Department of Commerce set up the AI Safety Institute Consortium in early February to support the US AI Safety Institute housed under NIST. The Consortium brings together over 200 organisations, including AI creators and users, government and industry researchers, academics and civil society organisations, to develop guidelines and standards for AI measurement and policy.

What does this mean for you?

The consultation response includes a detailed 2024 roadmap, but this is set out against the background of an upcoming general election, currently expected (but not certain) to take place in November 2024. There are already divisions emerging between the approaches of the Conservative and Labour Parties. At the AI Safety Summit, leading AI companies agreed to voluntary co-operation with governments on testing advanced AI models and, on 9 February 2024, the government published guidance on the AI Safety Institute’s (AISI) approach to evaluations and testing of advanced AI systems. Should the Labour Party win the next general election, however, it seems likely to pursue a more interventionist strategy. Labour recently said it plans to introduce a statutory regime which would replace the current voluntary arrangement. It proposes requiring firms to tell the government when they are developing AI systems over a specified capability level and to conduct safety testing with independent oversight. 

There is also a difference in approach between the strategy outlined in the consultation response and the House of Lords Communications and Digital Committee which published its report on ‘Large language models and generative AI’ on 2 February 2024. The report concludes that the government’s strategy focuses too much on AI safety and not enough on up-skilling, commercial opportunity and technical skills. It says the UK needs to rebalance towards boosting opportunities or risk losing influence and therefore becoming too dependent on overseas tech firms. It is also critical of the government’s stance on copyright and generative AI and urges it to produce clear guidance, if not legislation, to protect rights holders.

The lack of consensus in the UK is merely a reflection of the lack of global consensus. As many have pointed out, regulation of AI is unlikely to be effective, at least in terms of the most serious safety concerns, if it is piecemeal. While businesses in the UK may be relieved to hear they will not need to comply with complex new AI legislation, their use of AI will potentially be governed by a wide range of existing laws which will need to be taken into account. And multinationals will have to deal with a complex regulatory landscape which is unlikely to be future-proofed, given the pace of advancement in AI.

Perhaps this adds weight to the ‘watching brief’ approach to legislating adopted by the current government. Time will tell whether using existing law and regulators will be more effective in managing AI risks without stifling innovation than, say, the EU’s more prescriptive approach.