AI regulation is moving from debate to law. The EU has passed the world's first comprehensive AI law. India has enacted data protection legislation that directly shapes how AI can operate. The US, UK, China, and dozens of other countries are building their own frameworks. For anyone working with AI — as a developer, deployer, or user — understanding the regulatory landscape is becoming essential practical knowledge.
Regulation shapes what AI companies can build, how they must build it, who they can deploy it to, and what happens when it goes wrong. Getting it wrong in either direction — too loose or too restrictive — has real consequences for innovation, safety, and public trust.
The EU AI Act — the world's first comprehensive AI law
The European Union's AI Act entered into force in August 2024 and is being phased in through 2026. It is the most comprehensive AI regulation in the world and will shape global AI development, because companies that want to operate in Europe — the world's largest single market — must comply.
The EU AI Act takes a risk-based approach, categorising AI systems into four tiers:
- Unacceptable risk — practices such as social scoring by public authorities and certain manipulative or exploitative systems are banned outright
- High risk — AI used in areas such as healthcare, credit, employment, education, and law enforcement must meet strict obligations, including risk management, data governance, human oversight, and conformity assessments before reaching the market
- Limited risk — systems such as chatbots face transparency obligations: users must be told they are interacting with AI
- Minimal risk — the vast majority of AI systems, such as spam filters and video game AI, face no new obligations
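To make the tiers concrete, here is a minimal sketch of how a team might do a first-pass triage of its own use cases against them. The keyword lists and the classify_use_case helper are hypothetical simplifications for illustration; the Act's actual scope turns on legal definitions, not string matching.

```python
# Hypothetical first-pass triage of AI use cases against the EU AI Act's
# four risk tiers. The keyword lists are illustrative only; the Act's
# real scope is defined by legal criteria, not keyword matching.
BANNED_PRACTICES = {"social scoring", "subliminal manipulation"}
HIGH_RISK_DOMAINS = {"healthcare", "credit", "employment", "education", "law enforcement"}
TRANSPARENCY_ONLY = {"chatbot", "content generation"}

def classify_use_case(description: str) -> str:
    """Return a rough risk tier for a plain-text use-case description."""
    text = description.lower()
    if any(term in text for term in BANNED_PRACTICES):
        return "unacceptable risk: prohibited"
    if any(term in text for term in HIGH_RISK_DOMAINS):
        return "high risk: conformity assessment, human oversight, logging"
    if any(term in text for term in TRANSPARENCY_ONLY):
        return "limited risk: disclose AI use to users"
    return "minimal risk: no new obligations"

print(classify_use_case("chatbot for employment screening"))  # high risk
```

Note that a single product can land in the higher of two tiers, as in the example above: a chatbot is normally limited risk, but using it for employment screening pulls it into the high-risk category.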
Foundation model rules (GPAI)
The EU AI Act also introduced specific rules for general-purpose AI (GPAI) models — large foundation models like GPT-4 and Claude that can be used for many purposes. All GPAI providers must publish training data summaries and comply with EU copyright law. The most powerful models (those above a training-compute threshold of 10^25 floating-point operations, presumed to pose systemic risk) face additional requirements including adversarial testing, incident reporting, and cybersecurity measures.
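For a sense of what that threshold means in practice, the back-of-the-envelope sketch below uses the common approximation that training compute is roughly 6 × parameters × training tokens. The model scales are made-up examples, not disclosed figures for any real system.

```python
# Rough check of whether a training run would cross the EU AI Act's
# 1e25 FLOP systemic-risk threshold, using the common approximation
# that training compute ~= 6 * parameters * training tokens.
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def estimated_training_flops(n_parameters: float, n_tokens: float) -> float:
    return 6 * n_parameters * n_tokens

# Illustrative (hypothetical) model scales:
for name, params, tokens in [
    ("7B model, 2T tokens", 7e9, 2e12),
    ("70B model, 15T tokens", 70e9, 15e12),
    ("400B model, 15T tokens", 400e9, 15e12),
]:
    flops = estimated_training_flops(params, tokens)
    flagged = flops >= SYSTEMIC_RISK_THRESHOLD_FLOPS
    print(f"{name}: ~{flops:.1e} FLOPs -> systemic-risk tier: {flagged}")
```

Under this rough estimate, only very large training runs cross the line, which is the point of the threshold: the extra obligations target the handful of frontier-scale models.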
Penalties for non-compliance are severe: up to €35 million or 7% of global annual revenue for the most serious violations — whichever is higher.
India — DPDP Act and the National AI Strategy
India has taken a sector-by-sector approach to AI regulation rather than a comprehensive AI-specific law, but has established significant legal infrastructure:
Digital Personal Data Protection Act (2023)
India's DPDP Act establishes rights for Indian citizens over their personal data and imposes obligations on organisations that process it. Key provisions (a small compliance sketch in code follows the list):
- Consent requirement — organisations must obtain clear, informed consent before collecting and processing personal data
- Purpose limitation — data collected for one purpose cannot be used for another without fresh consent
- Cross-border data transfers — personal data may generally be transferred outside India except to countries the government notifies as restricted; stricter sectoral localisation rules (for example, for payment data) apply on top of the Act
- Rights of data principals — individuals can access their data, correct it, and request deletion
- Data Protection Board — a regulatory body with power to investigate complaints and impose fines up to ₹250 crore
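For teams training or deploying AI on Indian users' data, the consent and purpose-limitation provisions translate fairly directly into record-keeping logic. The sketch below is a minimal illustration assuming a simple in-memory registry; the class and field names are hypothetical, not anything prescribed by the Act.

```python
# Minimal sketch of DPDP-style consent and purpose-limitation checks.
# The data model is illustrative; the Act specifies legal duties, not schemas.
from dataclasses import dataclass, field

@dataclass
class ConsentRecord:
    data_principal_id: str                              # the individual the data relates to
    purposes: set[str] = field(default_factory=set)     # purposes consented to
    withdrawn: bool = False

class ConsentRegistry:
    def __init__(self):
        self._records: dict[str, ConsentRecord] = {}

    def record_consent(self, principal_id: str, purpose: str) -> None:
        rec = self._records.setdefault(principal_id, ConsentRecord(principal_id))
        rec.purposes.add(purpose)

    def may_process(self, principal_id: str, purpose: str) -> bool:
        """Purpose limitation: processing is allowed only for purposes
        the individual has consented to and not withdrawn."""
        rec = self._records.get(principal_id)
        return bool(rec and not rec.withdrawn and purpose in rec.purposes)

    def erase(self, principal_id: str) -> None:
        """Right of erasure: remove the principal's data on request."""
        self._records.pop(principal_id, None)

registry = ConsentRegistry()
registry.record_consent("user-42", "credit scoring")
print(registry.may_process("user-42", "credit scoring"))   # True
print(registry.may_process("user-42", "model training"))   # False: fresh consent needed
```

The shape of the check is what matters: before personal data flows into training or inference, the system asks whether that specific purpose was consented to and has not been withdrawn.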
Sector-specific regulation
India's sector regulators, including the Reserve Bank of India (RBI), SEBI, and IRDAI, are developing AI-specific guidance within their domains. AI used in credit decisions, insurance underwriting, and financial advice already faces scrutiny from existing financial regulators even without a comprehensive AI law.
India's AI ambitions
IndiaAI Mission — a ₹10,300 crore government programme — aims to build India's AI capability through compute infrastructure, datasets, and research. India's regulatory approach seeks to enable innovation while managing risks, positioning India as an AI-forward nation rather than a cautious one.
United States — sector-specific and executive action
The US does not have a comprehensive federal AI law. Instead, AI is regulated through a combination of executive action, sector-specific rules, and state law.
- Executive Order on AI (2023) — directed federal agencies to develop AI safety standards, required developers of the most powerful models to share safety test results with the government, and led to the creation of the US AI Safety Institute within NIST. Much of this was modified or reversed by the subsequent administration in 2025.
- Sectoral regulation — existing financial, healthcare, employment, and consumer protection laws apply to AI systems operating in those domains. The FTC has brought enforcement actions against deceptive AI practices under existing consumer protection authority.
- State laws — Colorado, Illinois, and several other states have passed AI-specific legislation on issues like hiring algorithms, facial recognition, and AI in insurance. California has been particularly active.
China — active AI regulation
China has moved quickly to regulate specific AI applications, particularly generative AI:
- Generative AI regulations (2023) — require AI-generated content to be labelled, prohibit AI content that undermines state authority or spreads "false information", and mandate security assessments before deployment of generative AI services to the public (a generic labelling sketch follows this list)
- Algorithm recommendation regulations — require transparency in recommendation algorithms, ban certain manipulative practices, and give users the right to opt out
- Deepfake regulations — prohibit deepfakes that damage national interests or individual reputations without consent
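What labelling looks like in practice varies by provider; the snippet below is a generic illustration of attaching a visible notice plus machine-readable provenance metadata to generated output. It is not based on any official Chinese technical standard, and the field names are invented for illustration.

```python
# Generic illustration of labelling AI-generated content: a visible notice
# for users plus machine-readable provenance metadata. Field names are
# illustrative, not taken from any regulator's technical specification.
import json
from datetime import datetime, timezone

def label_generated_content(text: str, model_name: str) -> dict:
    return {
        "content": f"[AI-generated] {text}",            # visible label for users
        "provenance": {                                  # machine-readable metadata
            "generator": model_name,
            "generated_at": datetime.now(timezone.utc).isoformat(),
            "ai_generated": True,
        },
    }

labelled = label_generated_content("Summary of today's market news...", "example-model")
print(json.dumps(labelled, indent=2))
```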
UK — principles-based approach
The UK is pursuing a lighter-touch, principles-based approach after Brexit, seeking to position itself as an AI-friendly jurisdiction. Rather than passing AI-specific legislation, it has directed existing sector regulators (the FCA, ICO, CMA, and others) to apply their existing powers to AI in their domains. The UK hosted the first global AI Safety Summit in 2023 at Bletchley Park.
The global patchwork — key tensions
The emerging regulatory landscape creates real challenges for organisations operating internationally:
- Conflicting requirements — what is permissible in one jurisdiction may be prohibited in another. Biometric data use, data localisation, and automated decision-making rules vary significantly across the EU, India, US, and China (see the obligations-map sketch after this list).
- The Brussels Effect — the EU's comprehensive approach tends to become a de facto global standard because multinational companies implement EU-compliant practices globally rather than maintaining separate systems per jurisdiction.
- Speed mismatch — technology moves faster than legislation. The EU AI Act was drafted before ChatGPT existed and had to be substantially revised to address foundation models. Regulatory frameworks will always lag technological development.
- Enforcement gaps — having laws is not the same as enforcing them. Many jurisdictions lack the technical expertise within regulatory bodies to effectively audit sophisticated AI systems.
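One way multinational teams cope is a per-jurisdiction obligations map that product and compliance reviews walk through before launch. The sketch below only restates points already covered in this article, heavily simplified; real obligations depend on the specific system and on jurisdiction-by-jurisdiction legal advice.

```python
# Drastically simplified per-jurisdiction obligations map for an AI product
# launch checklist. Entries summarise points from this article only; actual
# compliance depends on the specific system and proper legal analysis.
OBLIGATIONS = {
    "EU": [
        "classify the system under the AI Act's risk tiers",
        "high-risk: conformity assessment, human oversight, logging",
        "GPAI: training data summary, copyright compliance",
    ],
    "India": [
        "DPDP: informed consent and purpose limitation for personal data",
        "honour access, correction, and erasure requests",
        "check sector regulator guidance (RBI, SEBI, IRDAI)",
    ],
    "US": [
        "map the use case to sectoral rules (FTC, employment, credit, health)",
        "check state laws (e.g. Colorado, Illinois, California)",
    ],
    "China": [
        "label AI-generated content",
        "security assessment before public generative AI deployment",
    ],
}

def launch_checklist(jurisdictions: list[str]) -> list[str]:
    """Collect the obligations relevant to a planned launch footprint."""
    return [item for j in jurisdictions for item in OBLIGATIONS.get(j, [])]

for item in launch_checklist(["EU", "India"]):
    print("-", item)
```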
Key takeaways
- The EU AI Act is the world's first comprehensive AI law — risk-based, with bans for the highest-risk applications
- High-risk AI (healthcare, credit, employment, law enforcement) faces significant obligations including human oversight and conformity assessments
- India's DPDP Act establishes data rights for Indian citizens and obligations for organisations — directly affecting AI training and deployment
- The US regulates AI sector-by-sector without a comprehensive federal law — state laws are filling some gaps
- China regulates specific AI applications aggressively, particularly generative AI and content recommendation
- The global patchwork creates compliance complexity for international organisations — the EU AI Act is becoming a de facto global standard