What is the UK AI Regulation White Paper?
The UK’s AI sector is thriving, employing over 50,000 people and contributing £3.7 billion to the economy. UK universities produce top AI research and talent, and the country ranks third globally for AI unicorns and start-ups.
The UK’s goal is to be a leading place to develop and use AI that positively impacts lives. In March 2023, the UK government released the AI Regulation White Paper, outlining a proposed regulatory framework for AI.
Unlike the EU’s AI Act (see below for further details), which creates specific compliance obligations for various AI actors, the UK’s approach is principles-based, allowing existing regulators to apply principles within their sectors. The Secretary of State for Science, Innovation and Technology also proposed a central function within government to carry out activities such as risk assessment and regulatory coordination, supporting the adaptability and coherence of the government’s approach.
Following the White Paper’s release, the government conducted a consultation, receiving over 400 written responses and engaging with more than 300 participants in roundtables and workshops. The first international AI Safety Summit was also held at Bletchley Park in November 2023.
After considering the feedback, the government published its response to the consultation on 6 February 2024 (Response). Key points include a commitment to a “proportionate, context-based approach” to AI regulation, emphasising five cross-sectoral principles:
- Safety, security, and robustness
- Transparency and explainability
- Fairness
- Accountability and governance
- Contestability and redress
The Response also considered expanding these principles to include human rights, data quality, international alignment, systemic risks, sustainability, and education. It acknowledged that existing legal frameworks might not adequately distribute legal responsibility across the AI lifecycle. The government will refine the AI regulation framework and consider measures to ensure accountability and a fair allocation of responsibility.
Regarding highly capable general-purpose AI systems, the Response does not introduce or propose new laws or regulations, but anticipates introducing legislation if AI capabilities grow exponentially and the industry’s voluntary measures prove inadequate. The government distinguishes between highly capable general-purpose AI, highly capable narrow AI and agentic AI, recognising the challenge of regulating the most powerful systems. An update on new responsibilities for developers of these AI systems is expected later this year.
The government has established a central function to monitor AI risks, support regulator coordination, and address regulatory gaps, supported by a new steering committee. Targeted consultations on an AI risk register will continue.
The Digital Regulation Cooperation Forum launched a pilot AI and Digital Hub, bringing together four key regulators: the Competition and Markets Authority (CMA), the Information Commissioner’s Office (ICO), the Office of Communications (Ofcom) and the Financial Conduct Authority (FCA).
On copyright and AI, the UK IPO working group failed to agree a voluntary code of practice, leading the government to take the work forward itself. Issues around the use of copyright-protected materials in AI models remain unresolved.
Next Steps
The Response outlines the UK’s 2024 roadmap: continuing to develop domestic AI regulation policy, evaluating the efficacy of the regime, promoting AI opportunities while addressing risks, and supporting international AI governance collaboration. Frequent updates from the government and regulators are expected throughout the year.
What is the EU AI Act?
In March 2024, the European Parliament approved the Artificial Intelligence Act, a significant regulatory framework designed to ensure AI technologies are safe, respect fundamental rights, and foster innovation.
On 21 May 2024, the AI Act passed its final legislative hurdle when it was approved by the Council of the EU. Once published in the Official Journal of the European Union, the Act will enter into force 20 days later, with most provisions applying 24 months after entry into force (some obligations take effect earlier or later under a staged timetable).
The AI Act aims to protect fundamental rights, democracy, the rule of law, and environmental sustainability from high-risk AI applications while promoting innovation and establishing Europe as a leader in AI. The regulation imposes obligations on AI systems based on their potential risks and impacts.
Scope of the AI Act
The AI Act applies directly to businesses in the 27 EU Member States and to non-EU businesses, including those in the UK, with customers in the EU. It also covers businesses in Norway, Iceland, and Liechtenstein under the European Economic Area (EEA) arrangements. Importantly, any business operating in the EEA or whose AI outputs are used in the EEA will fall under the AI Act’s jurisdiction.
Applicability and compliance
The AI Act isn’t limited to those who build AI systems but extends to those involved in their operation, distribution or use, including providers, importers, distributors and deployers/users.
Banned applications
The AI Act bans certain AI applications that pose unacceptable risks to health, safety or fundamental rights, such as:
- AI exploiting vulnerabilities (e.g., age, disability) to distort behaviour, causing harm.
- Social scoring based on behaviour or personal characteristics leading to unjustified detrimental treatment.
- Emotion-inference systems used in workplaces or educational settings (unless for medical/safety reasons).
- Biometric categorisation inferring sensitive characteristics (e.g., race, political views) with exceptions for law enforcement.
- AI predicting criminal offence likelihood based solely on profiling.
- AI systems creating or expanding facial-recognition databases through untargeted scraping of images from the internet or CCTV footage.
- Real-time remote biometric identification in publicly accessible spaces for law enforcement (with narrow exceptions).
- AI systems using subliminal or deceptive techniques causing significant harm.
General purpose AI
This covers AI models performing core functions (e.g., generating text, speech, images) integrated into various AI systems. Providers must meet transparency obligations regarding training data and copyright, supporting downstream compliance. For AI models designated as having “systemic risk,” additional obligations include model evaluation, risk assessment, incident reporting, cybersecurity, and energy consumption monitoring.
Obligations for high-risk systems
High-risk AI includes systems integrated into products subject to EU safety regulations and those specifically classified as high-risk. Examples include biometric identification, emotion recognition (outside certain settings), safety components in infrastructure, educational/vocational systems, workplace systems, credit checks, insurance risk assessments, emergency service dispatch, and public sector applications. Providers who consider that a listed system does not pose a significant risk must document that self-assessment. Obligations for high-risk systems include risk management, documentation, transparency, human oversight, accuracy, robustness, and data governance.
Transparency requirements
Users must be made aware when they are interacting with an AI system. This applies to chatbots, emotion recognition systems, biometric categorisation, and systems generating synthetic content. Artificial or manipulated content, such as “deepfakes,” must be clearly labelled.
Penalties
Non-compliance can result in eye-watering fines: up to €35 million or 7% of worldwide group turnover (whichever is greater) for breaches of the prohibited-practice rules, and up to €15 million or 3% (whichever is greater) for most other infringements.
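As a rough illustration of the “whichever is greater” rule, the sketch below computes the theoretical maximum fine for a given worldwide turnover using the two tiers described above. The function name and structure are purely illustrative assumptions, not a compliance tool; actual penalties depend on the nature of the breach, mitigating factors and separate caps for SMEs.

```python
def max_fine_eur(worldwide_turnover_eur: float, prohibited_practice: bool) -> float:
    """Illustrative sketch only: theoretical maximum fine under the AI Act's
    'whichever is greater' rule, using the two tiers described in the text."""
    if prohibited_practice:
        fixed_cap, turnover_share = 35_000_000, 0.07  # up to €35m or 7% of turnover
    else:
        fixed_cap, turnover_share = 15_000_000, 0.03  # up to €15m or 3% of turnover
    return max(fixed_cap, turnover_share * worldwide_turnover_eur)

# Example: a group with €2bn worldwide turnover breaching a prohibited-practice rule
print(f"€{max_fine_eur(2_000_000_000, prohibited_practice=True):,.0f}")  # €140,000,000
```

For a group of that size, the 7% turnover-based cap (€140 million) exceeds the €35 million fixed cap, so the turnover-based figure sets the theoretical maximum.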