AI regulation is moving from debate to law. The EU has passed the world's first comprehensive AI law. India has enacted data protection legislation that directly shapes how AI can operate. The US, UK, China, and dozens of other countries are building their own frameworks. For anyone working with AI — as a developer, deployer, or user — understanding the regulatory landscape is becoming essential practical knowledge.

Why regulation matters

Regulation shapes what AI companies can build, how they must build it, who they can deploy it to, and what happens when it goes wrong. Getting it wrong in either direction — too loose or too restrictive — has real consequences for innovation, safety, and public trust.

The EU AI Act — the world's first comprehensive AI law

The European Union's AI Act entered into force in August 2024 and is being phased in through 2027, with the prohibitions applying first and obligations for high-risk systems following. It is the most comprehensive AI regulation in the world and will shape global AI development, because companies that want to operate in Europe — the world's largest single market — must comply.

The EU AI Act takes a risk-based approach, categorising AI systems into four tiers:

Unacceptable risk
Banned entirely. Includes: real-time biometric surveillance in public spaces (with narrow law enforcement exceptions), social scoring systems (whether run by public or private actors), AI that manipulates people through subliminal techniques, and AI that exploits vulnerable groups. These applications are prohibited regardless of purpose.
High risk
Permitted but heavily regulated. Applies to AI in: healthcare (diagnostic tools, surgical robots), education (assessment systems), employment (hiring, performance monitoring), credit and insurance, law enforcement, migration and border control, and critical infrastructure. Requires conformity assessments, human oversight, transparency, and registration in an EU database before deployment.
Limited risk
Transparency obligations apply. Chatbots must disclose they are AI. Deepfake content must be labelled. Users have a right to know when they are interacting with AI.
Minimal risk
No specific obligations. Spam filters, AI in video games, recommendation systems. The vast majority of current AI applications fall here.

Foundation model rules (GPAI)

The EU AI Act also introduced specific rules for General Purpose AI models — large foundation models like GPT-4 and Claude that can be used for many purposes. All GPAI providers must publish training data summaries and comply with EU copyright law. The most powerful models (above a computational threshold) face additional requirements including adversarial testing, incident reporting, and cybersecurity measures.

Penalties for non-compliance are severe: up to €35 million or 7% of global annual revenue for the most serious violations — whichever is higher.

India — DPDP Act and the National AI Strategy

India has taken a sector-by-sector approach to AI regulation rather than a comprehensive AI-specific law, but has established significant legal infrastructure:

Digital Personal Data Protection Act (2023)

India's DPDP Act establishes rights for Indian citizens over their personal data and imposes obligations on organisations that process it. Key provisions:

  • Consent — personal data may generally be processed only with clear, informed consent, or on narrow "legitimate use" grounds
  • Data principal rights — individuals can access, correct, and request erasure of their data, and seek grievance redressal
  • Data fiduciary obligations — organisations processing data must maintain security safeguards, notify breaches, and limit processing to stated purposes
  • Enforcement — a Data Protection Board of India can impose penalties of up to ₹250 crore per violation

Sector-specific regulation

India's sector regulators — the Reserve Bank of India (banking), SEBI (securities markets), IRDAI (insurance), and others — are developing AI-specific guidance within their domains. AI used in credit decisions, insurance underwriting, and financial advice already faces scrutiny under these regulators' existing powers, even without a comprehensive AI law.

India's AI ambitions

IndiaAI Mission — a ₹10,300 crore government programme — aims to build India's AI capability through compute infrastructure, datasets, and research. India's regulatory approach seeks to enable innovation while managing risks, positioning India as an AI-forward nation rather than a cautious one.

United States — sector-specific and executive action

The US does not have a comprehensive federal AI law. Instead, AI is regulated through a combination of executive action, rules from sector regulators such as the FTC and FDA, and a growing body of state law (Colorado's 2024 AI Act is an early example).

China — active AI regulation

China has moved quickly to regulate specific AI applications, particularly generative AI:

  • Algorithmic recommendation rules (2022) — require transparency and user opt-outs for recommendation algorithms, with algorithms registered with the regulator
  • Deep synthesis rules (2023) — require labelling of synthetically generated or altered content, including deepfakes
  • Generative AI measures (2023) — require providers of public-facing generative AI services to conduct security assessments and ensure generated content complies with Chinese law

UK — principles-based approach

The UK is pursuing a lighter-touch, principles-based approach after Brexit, seeking to position itself as an AI-friendly jurisdiction. Rather than passing AI-specific legislation, it has directed existing sector regulators (the FCA, ICO, CMA, and others) to apply their existing powers to AI in their domains. The UK hosted the first global AI Safety Summit in 2023 at Bletchley Park.

The global patchwork — key tensions

The emerging regulatory landscape creates real challenges for organisations operating internationally:

  • Divergent definitions — what counts as "high-risk", or even as "AI", differs across jurisdictions, so the same system may be classified differently in each market
  • Conflicting obligations — transparency, data localisation, and content rules in one jurisdiction can sit uneasily with requirements elsewhere
  • The strictest rule wins — many organisations build to the toughest applicable standard (often the EU's) rather than maintain separate regional versions, the so-called Brussels effect
  • Compliance cost — smaller organisations bear the fixed costs of multi-jurisdiction compliance disproportionately

Key takeaways

  • The EU AI Act is the world's first comprehensive AI law — risk-based, with bans for the highest-risk applications
  • High-risk AI (healthcare, credit, employment, law enforcement) faces significant obligations including human oversight and conformity assessments
  • India's DPDP Act establishes data rights for Indian citizens and obligations for organisations — directly affecting AI training and deployment
  • The US regulates AI sector-by-sector without a comprehensive federal law — state laws are filling some gaps
  • China regulates specific AI applications aggressively, particularly generative AI and content recommendation
  • The global patchwork creates compliance complexity for international organisations — the EU AI Act is becoming a de facto global standard