
AI Regulation in the UK, EU and US

The UK, EU, and US take markedly different approaches to regulating AI. While the three share common themes of safety, security, transparency, and fairness, the execution of these principles varies significantly.


The EU's comprehensive model, the US's evolving sector-specific approach, and the UK's principles-based, regulator-led framework each offer unique perspectives on how to balance the rapid advancement of AI technology with ethical considerations and public trust. As AI continues to develop, these regulatory landscapes will play a crucial role in shaping its future. This article examines the approaches of the EU, US and UK, as well as those of other countries around the world.

European Union: The AI Act

The European Union has established a comprehensive legislative framework for AI with the AI Act to limit certain AI use cases and improve transparency about data use. The Act classifies AI systems based on risk levels, applying stricter controls on high-risk applications, particularly in sensitive areas like healthcare, transportation, and education.

Key Features:

  • Risk-Based Categorisation: AI applications are divided into unacceptable, high, limited, and low-to-minimal risk categories, with corresponding regulatory requirements.
  • Transparency and Bias Mitigation: The Act mandates increased transparency in AI development and holds companies accountable for any harm resulting from high-risk AI systems.
  • Global Influence: The AI Act could set a benchmark for global AI policy, influencing AI development and usage worldwide.

The EU's approach includes setting up a new European AI Office to coordinate compliance, implementation, and enforcement of the AI Act. This body will be the first globally to enforce binding rules on AI, marking a significant step in centralised AI governance.

United Kingdom: Principles-Based Framework

The UK's regulatory approach is principles-based, emphasising flexibility, innovation, safety, and public trust.

Key Features:

  • Six Core Principles: The UK's framework is built on the principles of safety, technical security, transparency, fairness, accountability, and redress or contestability.
  • Regulator-Led Implementation: Unlike the EU's central regulatory body, the UK's approach allows different sector-specific regulators to interpret and implement AI principles.
  • Innovation and Economic Growth: The UK's regulatory approach aims to foster a pro-innovation environment while ensuring AI technologies are safe and reliable.

United States: Evolving AI Regulations

The US approach to AI regulation is currently evolving, characterised by sector-specific guidelines and a focus on responsible AI development.

Key Features:

  • Executive Orders and AI Bill of Rights: The US has issued an Executive Order and an AI Bill of Rights, emphasising civil rights, privacy, data protection, and trustworthy AI development.
  • Sector-Specific Approach: US regulations are more industry-friendly, focusing on specific sectors rather than a comprehensive model.
  • Ethical and Responsible AI Development: The US emphasises investment in responsible AI practices, including security, privacy, human rights, transparency, fairness, and inclusion.

A framework grading types and uses of AI by the risk they pose (similar to the EU's) may be introduced. Some are also calling for a state-centric approach rather than a focus solely on the federal level.

Overlaps and Differences

Overlaps:

  • Emphasis on Safety and Security: All three regions stress the importance of AI safety and security.
  • Transparency and Fairness: There is a shared focus on transparency and fairness in AI systems.
  • Accountability and Redress: The UK, EU, and US recognise the need for clear accountability and mechanisms for redress in AI systems.

Differences:

  • Regulatory Scope and Approach:

EU: The AI Act applies uniform regulations across all member states, creating a centralised and comprehensive framework for AI regulation.

UK: A consistent set of overarching principles is applied across different sectors, with sector-specific regulators interpreting and implementing these principles, leading to a coordinated but flexible regulatory approach.

US: Various federal agencies regulate AI independently in their respective domains, and there is no unifying set of AI-specific principles that spans across these sectors, resulting in a more fragmented and sector-specific regulatory environment.

  • Centralised vs. Decentralised Regulation:

EU: Features centralised AI regulation with its European AI Office, overseeing AI governance uniformly across member states.

UK: Adopts a decentralised approach, using sector-specific regulators for AI oversight, ensuring sector-tailored regulation and a coordinated application of overarching principles.

US: Also adopts a decentralised approach, but relies on various federal agencies to govern AI within their specific sectors, leading to a more fragmented and varied regulatory environment without a unified set of AI-specific principles.

  • Global Influence: The EU's AI Act is likely to have a more significant global influence, setting standards that may be adopted worldwide.

Other Countries
  • Africa: National AI strategies have been drafted by some countries (South Africa, Nigeria, Rwanda), and the African Union is planning to release an AI strategy for the continent in 2024.
  • Australia: Focuses on voluntary AI regulation (the AI Ethics Framework), with a review underway for a more comprehensive AI governance and regulatory framework. The Government is seeking public feedback.
  • Mainland China: An Artificial Intelligence Law is on the legislative agenda and may be released in 2024. Currently, pieces of legislation are released as new AI products emerge, enabling a quick response. A national AI office has also been proposed. The Cyberspace Administration of China (CAC) proposes rules and restrictions for companies developing generative AI products, e.g. the need to register foundation models before releasing them. Algorithm governance rules require algorithms to avoid discrimination in training, generate content that reflects “core socialist values”, and undergo a security assessment by the CAC.
  • Taiwan: A private foundation has proposed a draft law outlining AI development principles. 
  • Singapore: Encourages AI innovation in finance through best practice principles. AI Verify, a testing framework, assists in the evaluation of AI systems against international AI ethics principles. 
  • Hong Kong: Provides sector-specific guidance, particularly in finance, with an emphasis on consumer protection and ethical AI use. The Hong Kong Monetary Authority has issued AI guidelines focusing on financial consumer protection, while the Securities and Futures Commission expects firms utilising AI to conduct thorough testing and risk assessment. The Privacy Commissioner has expressed concerns over generative AI, especially when it uses sensitive personal data, advocating for more formal legislation.
  • Japan: Combines non-binding guidelines for AI innovation with mandatory sector-specific restrictions for large platforms. An AI Strategy Council looks at ways to promote and regulate AI.
  • Russia: The National AI Center is reviewing AI regulations amongst its other responsibilities.
  • South Korea: Passed a bill promoting the AI industry and ethical guidelines, with ongoing development of AI-specific policies.

The regulatory approaches to AI across different regions are diverse and evolving. The EU's AI Act establishes a centralised and comprehensive regulatory framework, while the UK's approach is guided by overarching principles implemented by sector-specific regulators. The US has a more fragmented, sector-specific approach without a unified set of AI principles. Globally, from Africa's draft national strategies to Asia's varied responses, each region's approach reflects its unique priorities and challenges in AI governance. These developments highlight the importance of adapting to rapid technological advances and the need for ongoing assessment and refinement of AI regulations worldwide.

References:

Five things you need to know about the EU’s new AI Act

Regulating AI in the UK

UK sets out proposals for new AI rulebook to unleash innovation and boost public trust in the technology

UK's Approach to Regulating the Use of Artificial Intelligence

What’s next for AI regulation in 2024? 

2024 is the Year of Radical AI Regulation Across the Globe

Balancing AI Innovation and Regulation: The Potential Impact on U.S. Security Companies and Practitioners 

What to Expect in Evolving U.S. Regulation of Artificial Intelligence in 2024

Congress doc wants states to regulate AI

Supporting responsible AI: discussion paper Australian Government

China enters the debate on generative AI with new draft rules

China releases rules for generative AI

Artificial Intelligence and Autonomy in Russia

January 29, 2024
