
Global AI Regulations

A brief overview of other global AI regulatory developments

Artificial Intelligence (AI) is transforming industries, economies, and societies worldwide. Generative AI systems are evolving rapidly, prompting governments and regulatory bodies to foster innovation while ensuring ethical standards, public safety, and accountability.
 
The regulatory landscape for AI varies significantly across regions, reflecting diverse priorities, legal traditions, and societal values. This article provides a comprehensive overview of key AI regulatory developments globally, highlighting the major frameworks and legislative efforts shaping how AI is governed today.

European Union: Pioneering Comprehensive AI Governance

The European Union (EU) has emerged as a global leader in AI regulation, emphasizing a balanced approach that promotes innovation while safeguarding fundamental rights and ethical principles. Two landmark regulations illustrate this dual focus:

 

Data Governance Act (DGA)

The Data Governance Act, adopted by the EU, aims to enhance trust and facilitate data sharing across sectors by addressing technical and legal barriers. By fostering the creation of common European data spaces, the DGA supports cross-sector collaboration in areas such as health, environment, energy, and finance. This framework encourages both public and private entities to share data securely and responsibly, which is essential for training AI systems on diverse, high-quality datasets. The DGA’s emphasis on transparency and data sovereignty aligns with the EU’s broader digital strategy to empower citizens and businesses while maintaining strict data privacy protections.

 

Digital Services Act (DSA)

Effective from November 16, 2022, and fully applicable across all EU member states since January 1, 2024, the Digital Services Act represents a significant step toward regulating online platforms that deploy AI-driven content moderation and recommendation systems. The DSA imposes stringent obligations on platforms to combat illegal content, hate speech, and disinformation, with heightened responsibilities for large platforms and search engines. This regulation ensures a safer and more transparent online environment, protecting consumer and business rights while holding digital service providers accountable for the societal impacts of their AI-powered algorithms. Together with the DGA, the DSA illustrates the EU's proactive approach to AI governance.

For organizations navigating these regulations, understanding the technical and compliance requirements is crucial. Resources such as the NIST Risk Management Framework (RMF) provide valuable guidance on managing AI risks in alignment with regulatory expectations.

 


United States: State-Level Initiatives and Emerging Federal Discussions

In the United States, AI regulation is currently fragmented: states are taking the lead in enacting laws while federal policymakers, including the Federal Trade Commission, deliberate on a cohesive national strategy.

 

California’s SB 1047: The Safe and Secure Innovation for Frontier Artificial Intelligence Models Act

California has introduced one of the most ambitious AI regulatory bills, SB 1047, which mandates rigorous safety checks for developers of the most powerful AI models before their deployment. Passed by the state legislature on August 28, 2024, the bill requires compliance audits and prohibits the use of AI models that pose critical harm risks. This legislation reflects growing concerns about catastrophic AI failures and the need for proactive risk mitigation. However, it has sparked debate among stakeholders, with proponents emphasizing public safety and critics warning that overly stringent rules could stifle innovation and fragment the U.S. AI ecosystem. The bill awaits Governor Newsom’s decision by September 30, 2024.

The evolving regulatory environment in the U.S. underscores the importance of standards like ISO/IEC 23053, a framework for AI systems using machine learning, which helps organizations implement robust AI safety and quality management practices.

 


Canada: Principle-Based AI Oversight

Canada is advancing AI regulation through a principle-based approach, exemplified by the proposed Bill C-27: Artificial Intelligence and Data Act (AIDA). This legislation aims to establish a comprehensive framework for the responsible development and deployment of AI technologies. AIDA focuses on transparency, accountability, and risk management, requiring organizations to assess and mitigate potential harms associated with AI systems. By adopting a flexible, principles-driven strategy, Canada seeks to balance innovation with ethical governance, ensuring AI benefits society while minimizing risks.
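Risk-based frameworks like AIDA generally expect organizations to classify systems by potential for harm before deciding how deep an assessment and mitigation effort is needed. As a purely illustrative sketch (the tier names and criteria here are our own assumptions, not AIDA's legal categories), such an internal triage step might look like:

```python
# Illustrative sketch only: tier names and criteria below are assumptions
# made for demonstration, not legal categories from AIDA or any statute.
from dataclasses import dataclass


@dataclass
class AISystem:
    name: str
    automates_decisions_about_people: bool = False
    used_in_critical_infrastructure: bool = False
    generates_public_content: bool = False


def risk_tier(system: AISystem) -> str:
    """Assign a coarse risk tier that decides how deep the harm assessment goes."""
    if system.used_in_critical_infrastructure:
        return "high"
    if system.automates_decisions_about_people:
        return "elevated"
    if system.generates_public_content:
        return "limited"
    return "minimal"


# Example: a credit-scoring model that makes decisions about individuals.
triage = risk_tier(AISystem("loan-scoring", automates_decisions_about_people=True))
print(triage)  # elevated
```

A real compliance process would map such tiers to concrete obligations (documentation, audits, human oversight) defined by the applicable law rather than hard-coded rules.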

Canada’s approach aligns with international efforts to harmonize AI standards and promote responsible AI innovation. For businesses operating in Canada and beyond, understanding these principles is essential for compliance and competitive advantage.

 


China: Rapid Regulatory Expansion and Content Control

China has taken a proactive stance on AI regulation since 2021, implementing a series of laws targeting digital platforms and AI-generated content. In 2023, China introduced national regulations addressing issues such as deepfakes and misinformation, reflecting concerns about AI’s potential misuse. These regulations impose strict requirements on AI content creators and platform operators to ensure authenticity, transparency, and user protection.

China’s regulatory framework emphasizes state control and social stability, with strong enforcement mechanisms to prevent harmful AI applications. This approach contrasts with Western models but highlights the global recognition of AI’s societal impact. Companies engaging with the Chinese market must navigate these complex rules carefully to ensure compliance and operational continuity.

 


India: Emerging Regulatory Oversight

India’s AI regulatory landscape is evolving rapidly. Initially adopting a hands-off approach, the Indian government shifted course in 2023. The Ministry of Electronics and Information Technology mandated that platforms experimenting with or developing AI tools obtain governmental approval before public release. This move signals India’s intent to exercise greater oversight over AI technologies, balancing innovation with national security and ethical considerations.

India’s regulatory trajectory is still unfolding, with ongoing consultations and policy development expected. Businesses and developers should monitor these changes closely to align with emerging requirements, including regulations under the Information Technology Act.

 

Global Overview of AI Regulatory Initiatives by Country/Region


 

| Country/Region | Status Category | Summary of Initiative(s) |
| --- | --- | --- |
| African Union (AU) | National Strategy + Official Guidelines/Framework | Developing continental "AI Framework for Development" promoting harmonized regulations, human rights, fairness. |
| Argentina | Proposed / Draft Law / Bill Under Consideration | Bill reportedly introduced focusing on transparency, accountability, data privacy (details limited). |
| Australia | National Strategy + Official Guidelines/Framework | Government consultation on "Safe and Responsible AI" (2023), focusing on risk-based approach, potentially mandatory guardrails for high-risk AI. |
| Bahrain | Implemented Comprehensive AI Law | Enacted comprehensive AI Law (No. 56 of 2023) covering scope, prohibitions, licensing, liability, penalties. Also AI in Health Law (No. 34 of 2023). |
| Brazil | Proposed / Draft Law / Bill Under Consideration | Bill 2338/2023 under discussion, proposing risk-based framework similar to EU AI Act. |
| Chile | Proposed / Draft Law / Bill Under Consideration | Bill (Boletín N° 15869-19) introduced focusing on transparency, non-discrimination, human oversight. |
| Costa Rica | National Strategy + Official Guidelines/Framework | Unveiled National AI Strategy aiming for ethical and responsible adoption. |
| Egypt | Official Guidelines / Ethics Principles Only | Has a National AI Strategy. Specific guidelines mentioned but details limited in sources. |
| Israel | Official Guidelines / Ethics Principles Only | Published AI Policy document (Oct 2023) emphasizing risk-based approach, ethics, leveraging existing laws. |
| Japan | National Strategy + Official Guidelines/Framework | Flexible, risk-based governance. "AI Strategy 2022," "AI Guidelines for Business" (2024), generative AI guidelines. G7 Hiroshima AI Process lead. |
| Kazakhstan | Official Guidelines / Ethics Principles Only | AI part of "Digital Kazakhstan" program. Focus on adoption; existing digital/data laws apply. |
| Kenya | In Discussion / Development | Government taskforce developed "AI Practitioners Guide" (2024) as a step towards potential regulation. |
| Kuwait | In Discussion / Development | Early stages. Draft National AI Strategy (2025-2028) emphasizes need for legal/regulatory framework. |
| Mexico | Official Guidelines / Ethics Principles Only | No specific AI law. Relies on existing laws. Independent "IA2030Mx" coalition developed National AI Agenda. |
| New Zealand | National Strategy + Official Guidelines/Framework | Released "AI Framework for the Public Service" for government agencies. |
| Nigeria | In Discussion / Development | Committed to regulation (Bletchley signatory). Developing National AI Strategy. |
| Oman | National Strategy + Official Guidelines/Framework | National Program for AI (Sep 2024). Draft National AI Policy & Ethics Charter released for consultation (Aug 2024). |
| Qatar | National Strategy + Official Guidelines/Framework | National AI Strategy (2019), AI Committee (2021), Secure Usage Guidelines (Feb 2024), Financial Sector AI Guidelines (Sep 2024). |
| Saudi Arabia | National Strategy + Official Guidelines/Framework | SDAIA established (2019), National Strategy (2020), AI Ethics Principles (2023), Generative AI Guidelines (2024). |
| Serbia | Official Guidelines / Ethics Principles Only | Adopted AI Development Strategy (2020-2025) and established ethical guidelines. |
| Singapore | National Strategy + Official Guidelines/Framework | NAIS 2.0 (2023/24). Sectoral approach: Model AI Governance Framework, AI Verify, Generative AI guidelines. |
| South Africa | National Strategy + Official Guidelines/Framework | National AI Discussion Document (2024). PC4IR established. Incremental amendments to existing laws (e.g., health care, aviation). |
| South Korea | Passed Law (Not Yet Fully Effective) | Comprehensive AI Act passed (2024, effective 2026). Risk-based framework. National Strategy (2019), Ethics Guidelines (2020). |
| Switzerland | In Discussion / Development | Analyzing need for specific regulation (report expected end 2024/early 2025). Currently relies on adapting existing laws. |
| Thailand | Official Guidelines / Ethics Principles Only | National AI Strategy & Action Plan (Phase 2: 2024-2027), AI Ethics Guidelines (2023). |
| Türkiye (Turkey) | National Strategy + Official Guidelines/Framework | National AI Strategy (2021-2025). Considering regulation (EU influence). Leverages existing cybercrime/data laws. Cybersecurity Agency/Action Plans. |
| Uganda | In Discussion / Development | No specific legislation. Experts calling for swift regulation (Mar 2025). Potential "Rights-Based Policy Playbook" in development. |
| Ukraine | National Strategy + Official Guidelines/Framework | AI Regulation Roadmap (Oct 2023): soft-law approach leading to EU-aligned legislation. Copyright amendments (Dec 2022), Media/Human Rights Guidelines. |
| United Arab Emirates (UAE) | National Strategy + Official Guidelines/Framework | National AI Strategy 2031, AI Ministry, Ethics Guidelines (2022), AI Charter (2024), International Policy (2024). AIATC in Abu Dhabi. |
| United Kingdom (UK) | National Strategy + Official Guidelines/Framework | "Pro-innovation" approach (White Paper 2023). Relies on existing regulators + 5 principles. Moving towards statutory duties. AI Safety Institute. |
| Armenia | No Specific AI Regulation Found | No specific AI regulations or strategies found in search. |
| Norway | No Specific AI Regulation Found | Follows EU developments closely. No specific national AI law found beyond EU alignment. |
| Peru | Proposed / Draft Law / Bill Under Consideration | Bill (No. 765/2021-CR) introduced, but current status unclear. |
| Rwanda | No Specific AI Regulation Found | No specific AI regulations or strategies found in search. |
| Tunisia | No Specific AI Regulation Found | No specific AI regulations or strategies found in search. |

 


The Broader Context: Why Global AI Regulation Matters

Global AI regulations are not merely a legal obligation but a strategic imperative for organizations worldwide. Effective regulation fosters trust among users, mitigates risks of bias and discrimination, and ensures AI systems operate transparently and safely. As AI technologies become more embedded in critical infrastructure, healthcare, finance, and public services, regulatory compliance becomes integral to sustainable innovation.

For companies seeking to implement AI responsibly, frameworks such as the NIST Risk Management Framework and standards like ISO/IEC 23053 offer practical guidance on risk assessment, governance, and ethical AI deployment. Deployers of AI systems must also be aware of licensing requirements and content disclosure obligations, which differ across regulatory environments.

 


Supporting Evidence and Further Reading

The importance of global AI regulation is widely recognized by experts and institutions. According to the World Economic Forum, harmonized AI governance frameworks are essential to unlock AI’s full potential while safeguarding human rights. Similarly, the OECD AI Principles provide an international benchmark for trustworthy AI development.

For a detailed analysis of AI risks and governance strategies, the McKinsey Global Institute offers comprehensive research on AI’s economic and societal impacts, emphasizing the role of regulation in shaping AI’s future.

 


The Road Ahead: Embracing Responsible AI Innovation

The global AI regulatory landscape is complex and rapidly evolving, reflecting diverse national priorities and approaches. From the EU’s comprehensive data and digital services regulations to California’s pioneering safety legislation, Canada’s principle-based oversight, China’s content control, and India’s emerging approvals system, each jurisdiction contributes unique perspectives to AI governance.

Organizations must stay informed and agile, leveraging international standards and best practices to navigate this dynamic environment. By doing so, they can harness AI’s transformative power responsibly, ensuring innovation aligns with ethical imperatives and public trust.

For ongoing updates and expert insights on Global AI regulations and standards, the Nemko Digital platform offers a valuable resource hub tailored to technical and business audiences.

Dive further into the AI regulatory landscape

Nemko Digital helps you navigate the regulatory landscape with ease. Contact us to learn how.

Contact Us
