Singapore AI Regulation & Policy



Singapore governs AI through voluntary Model and Generative AI frameworks, PDPC/IMDA guidance, and tools like AI Verify and ISAGO, with MAS sector rules, under the National AI Strategy 2.0.

Singapore’s AI Regulation Overview

Learn about Singapore’s evolving AI policy framework, ethics guidelines, and compliance strategies as the city-state strengthens its position as Southeast Asia’s trusted AI hub.

 

Singapore’s Governance Philosophy

Singapore governs AI through a voluntary model, balancing innovation with accountability via non-binding frameworks, assurance tools, and sector-specific guidance. By 2025, this approach integrates AI Verify, Singapore’s government-developed AI testing toolkit; ISAGO, a self-assessment guide for operationalizing ethical AI governance; and the AI Verify Foundation, an open-source community advancing global assurance standards. Together, these mechanisms form a coherent ecosystem that promotes responsible AI while preserving flexibility for innovation and cross-border collaboration, establishing Singapore as a global reference point for trusted AI development.

 

National Artificial Intelligence Strategy 2.0 (NAIS 2.0)

Launched in December 2023 and now under active implementation, NAIS 2.0 allocates more than SGD 1 billion over five years to advance AI capabilities, digital trust, and workforce readiness. Under the National AI Strategy 2.0, Singapore’s 2025 implementation focuses on three strategic pillars designed to anchor the nation’s leadership in trusted AI:

  1. Compute and Data Infrastructure: Singapore is scaling national compute capacity through the new AI Compute Cluster (AICC), enabling large-scale model training and research collaboration across academia, industry, and government.
  2. Talent and Skills: To build deep technical and governance expertise, the government has introduced new AI governance and assurance training programmes, including professional certification pathways and joint industry–academic initiatives led by IMDA and SkillsFuture Singapore. These programmes focus on workforce upskilling in AI assurance, risk management, and ethical deployment, ensuring a strong pipeline of talent to support Singapore’s trusted AI ecosystem.
  3. Assurance and Governance: Singapore is embedding AI Verify and ISAGO into public-sector procurement standards, making responsible AI governance a foundational requirement for all government-linked projects and a model for private-sector adoption.

NAIS 2.0 positions Singapore as a trust anchor, aligning technological leadership with transparent governance and public confidence.

 

Model AI Governance Framework & Generative AI Extension

The Model AI Governance Framework (Model Framework) remains the cornerstone of Singapore’s approach, providing practical guidance on explainability, fairness, human oversight, and accountability. In 2024, Singapore introduced the Model AI Governance Framework for Generative AI, developed with input from over 70 global organizations (including OpenAI, Google, Microsoft, and Anthropic).

Figure 1. Foundation of Responsible AI represented in Singapore’s Model AI Governance Framework.

 

In 2025, these principles are being translated into reference assurance criteria aligned with the OECD AI Principles (2024) and the GPAI Code of Practice on Generative AI, allowing interoperability with EU, UK, and US assurance models.

 

Implementation Tools & Testing Ecosystem

To translate its governance principles into measurable practice, Singapore has built a robust ecosystem of implementation tools and testing mechanisms that enable organizations to evaluate, verify, and continuously improve the trustworthiness of their AI systems.

 

ISAGO 2.0 - Implementation and Self-Assessment Guide for Organisations

The 2025 update (ISAGO 2.0) integrates with AI Verify for a seamless governance-to-testing workflow, helping companies map AI risk tiers and governance maturity, build stakeholder communication plans, and conduct internal audits using standardized metrics.

 

AI Verify Toolkit & AI Verify Foundation

AI Verify remains the world’s first government-developed AI testing toolkit combining technical tests with process checks. The AI Verify Foundation, created in 2023, has expanded to 90+ member organizations by 2025 and now maintains a Global Model Evaluation Toolkit for large-language and multimodal models. The Foundation collaborates with OECD and GPAI to harmonize testing and assurance standards, ensuring global recognition of AI Verify assessments.

 

Institutional Landscape: Coordinating Singapore’s AI Governance Ecosystem

Singapore’s AI governance model is supported by a multi-agency ecosystem that combines regulatory oversight, policy development, and technical assurance. Each institution plays a distinct but interconnected role in ensuring that AI innovation advances responsibly and in alignment with national priorities. The Info-communications Media Development Authority (IMDA) and the Personal Data Protection Commission (PDPC) lead regulatory and ethical governance, while the Monetary Authority of Singapore (MAS) supervises AI use in financial services. Research and cross-sector coordination are anchored by the Centre for AI & Data Governance (CAIDG), and the AI Verify Foundation provides global testing and assurance infrastructure. Together, these entities ensure that Singapore’s AI governance remains coherent, adaptive, and internationally trusted.

 

  • Info-communications Media Development Authority (IMDA): Leads national AI policy, digital-trust regulation, and innovation enablement. 2025 update: hosts the AI Safety Institute (Singapore) for high-impact and generative model evaluation.
  • Personal Data Protection Commission (PDPC): Oversees data protection, privacy, and AI ethics integration under the Personal Data Protection Act (PDPA). 2025 update: incorporates AI governance criteria into PDPA audits and compliance assessments.
  • Monetary Authority of Singapore (MAS): Supervises AI deployment in financial services through the FEAT (Fairness, Ethics, Accountability, Transparency) Principles. 2025 update: expanding oversight to cover generative AI applications and automated decision systems.
  • Centre for AI and Data Governance (CAIDG): Policy-research hub under Singapore Management University, providing academic and policy coordination. 2025 update: leads ASEAN AI governance collaboration and supports research on cross-border regulatory models.
  • AI Verify Foundation (AIVF): Open-source multi-stakeholder consortium advancing AI testing and assurance. 2025 update: launched the LLM Evaluation Toolkit (2025) and is expanding interoperability with OECD and GPAI frameworks.
  • Implementation and Self-Assessment Guide for Organisations (ISAGO 2.0): Organisational self-assessment and governance maturity tool aligned with the Model AI Governance Framework. 2025 update: integrated with AI Verify reports for unified assurance and continuous compliance monitoring.

 

Regional & Global Cooperation

Singapore’s governance model is increasingly international in scope. Building on its reputation as a neutral and innovation-friendly hub, the city-state actively shapes global AI standards through regional coordination, cross-border testing collaborations, and multilateral partnerships that promote interoperability and mutual recognition of AI assurance practices.

 

ASEAN AI Governance Guidelines (2024)

The Association of Southeast Asian Nations (ASEAN), a regional bloc of ten member states promoting economic, political, and technological cooperation, has begun aligning on common approaches to AI governance. Co-led by Singapore, the voluntary ASEAN AI Governance Guidelines promote a consistent regional baseline for responsible AI use, mirroring the principles of Singapore’s Model AI Governance Framework while allowing flexibility for national contexts.

 

Digital Forum of Small States (DFOSS) AI Playbook

The Digital Forum of Small States (Digital FOSS or DFOSS) is a collaborative network launched in 2022 that brings together digital policy leaders from smaller nations to share governance approaches, co-develop capacity, and amplify their voice in global technology discourse. In September 2024, Singapore (via IMDA) and Rwanda introduced the first AI Playbook for Small States, co-curated through DFOSS. The Playbook aggregates best practices, case studies, and governance templates from member states, covering topics such as AI strategy formulation, infrastructure and talent building, regulatory design, and societal impact assessment. Designed as a living, open document, the Playbook encourages continuous updates and contributions, so small states can adapt it to evolving AI norms and leverage shared lessons to accelerate their AI governance maturity.

 

Global Engagement

Singapore plays an increasingly visible role in shaping international AI governance. It contributes actively to the Global Partnership on Artificial Intelligence (GPAI), the OECD AI Policy Observatory, and the Global AI Safety Summits, promoting interoperability between Eastern and Western regulatory systems. Through these platforms, Singapore champions assurance-based, innovation-friendly approaches to AI oversight and works toward mutual recognition of testing, certification, and risk-management practices across jurisdictions.

 

High-Impact and Generative AI Oversight

Singapore’s regulatory focus in 2025 extends to the governance of high-impact and generative AI systems, emphasizing robust safety testing, accountability mechanisms, and controlled experimentation environments to ensure that powerful models are deployed responsibly across critical sectors.

 

AI Safety Institute (Singapore)

Established in 2025 under IMDA, this Institute conducts safety and alignment testing for high-impact and generative AI models, working in tandem with AI Verify Foundation on evaluation protocols.

 

Generative AI Evaluation Sandbox

The Generative AI Evaluation Sandbox provides controlled testing environments and guidance for experimental AI deployments, ensuring safety validation before market release.

Figure 2. Key features in Singapore’s Generative AI Evaluation Sandbox

 

Benefits of Singapore’s Approach

Singapore’s balanced approach to AI governance demonstrates that strong ethical safeguards and innovation can coexist, creating a regulatory environment that fosters trust, attracts investment, and accelerates responsible AI adoption across industries. Together, these mechanisms succeed in:

  1. Encouraging Responsible Innovation: Through clear guidance that reduces regulatory uncertainty and fosters confidence for startups and multinationals alike.
  2. Safeguarding Data Privacy: By integrating its frameworks and guidelines with the national Personal Data Protection Act (PDPA) to ensure privacy-by-design while enabling data-driven AI development.
  3. Maintaining Global Competitiveness: Singapore’s interoperability with global standards attracts cross-border AI investment and enables international certification pathways.

 

Future of AI Regulation in Singapore

Singapore’s next stage focuses on AI assurance and international interoperability:

  • AI Assurance Framework (2026 planned): to unify technical, organizational, and ethical testing criteria.
  • Quantum & Autonomous AI Governance: early research guidelines under the Centre for AI and Data Governance (CAIDG).
  • Cross-Border AI Service Standards: enabling trusted data and model transfer across ASEAN.

This evolution will keep Singapore at the forefront of practical, innovation-friendly AI governance.

 

Frequently Asked Questions

What is Singapore’s Generative AI policy?

It follows the 2024 Model AI Governance Framework for Generative AI, updated in 2025 with OECD and GPAI assurance criteria to manage LLM and multimodal risks.

 
How does Singapore ensure accountability in AI systems?

Through organisational governance under ISAGO 2.0 and technical testing with AI Verify and AI Safety Institute standards.

 

What are Singapore’s core AI ethics principles?

Transparency, fairness, human-centricity, and explainability — translated into operational measures under the Model Framework.

 

What is the role of the AI Verify Foundation globally?

It anchors Singapore’s contribution to global assurance interoperability, co-developing test criteria with OECD and GPAI.

 

How does Singapore cooperate internationally?

Via ASEAN Guidelines, DFOSS Playbook, and participation in Global AI Safety Summits, positioning Singapore as a bridge between regulatory models.

 

Implementation Priorities for Firms Looking to Align with Singapore’s AI Initiatives

To effectively align with Singapore’s AI governance ecosystem, organizations must move beyond compliance checklists and embed responsible AI principles into daily operations. The government’s frameworks, anchored in the Model AI Governance Framework, AI Verify, and ISAGO, offer a practical pathway for doing so. Firms that proactively integrate these tools and practices not only meet ethical and regulatory expectations but also gain a competitive advantage in international markets where trust and transparency are becoming core differentiators.

  1. Establish Governance Frameworks:
    Create clear accountability structures and risk-classification processes aligned with Singapore’s Model AI Governance Framework to ensure ethical oversight and transparent decision-making.
  2. Adopt Testing Protocols:
    Use AI Verify and relevant sector-specific tools to evaluate model fairness, robustness, and transparency before deployment and throughout the AI lifecycle.
  3. Develop Stakeholder Transparency Plans:
    Implement communication measures—such as user disclosures, explainability summaries, and feedback mechanisms—to maintain public trust and regulatory confidence.
  4. Conduct Continuous Self-Assessment:
    Regularly apply ISAGO 2.0 to assess governance maturity, document improvements, and maintain continuous alignment with evolving national and international AI assurance standards.

 

Outlook: Assurance as the Next Regulatory Frontier

Singapore’s evolution from guidelines to assurance represents the next frontier in global AI regulation. Its hybrid model, where voluntary governance frameworks are reinforced by verifiable technical testing, moves beyond principles toward measurable accountability. This assurance-first approach safeguards public trust, enhances global interoperability, and accelerates market access for AI solutions developed under transparent, tested, and trusted conditions. By 2025, Singapore stands as one of the top international benchmarks for harmonized AI assurance, proving that innovation and accountability can evolve in tandem.

 

Accelerate Your AI Compliance Journey

Singapore’s pioneering governance model creates unprecedented opportunities for organizations ready to embrace responsible innovation. Whether developing AI systems or implementing robust governance frameworks, expert guidance ensures both compliance and competitive advantage. At Nemko Digital, our specialists help organizations translate regulatory principles into practice, supporting every stage of your AI compliance journey, from framework assessment and AI Verify testing to ongoing assurance, monitoring, and optimization. Transform your AI strategy with confidence. Partner with Singapore’s leading AI governance experts to develop a tailored regulatory roadmap that positions your organization at the forefront of responsible, future-ready AI innovation in Southeast Asia’s most dynamic technology hub.

Dive further into the AI regulatory landscape

Nemko Digital helps you navigate the regulatory landscape with ease. Contact us to learn how.

Get Started on your AI Governance Journey