AI Regulation Chile
From Principles to Practice: How Chile is shaping Responsible AI Governance
Chile is setting a new benchmark for responsible AI in Latin America with its 2025 risk-based regulatory framework that combines innovation, ethics, and accountability.
In 2025, Chile is advancing one of the region's most comprehensive frameworks for artificial intelligence regulation. The proposed Law Regulating Artificial Intelligence Systems introduces a tiered, risk-based model inspired by the EU AI Act, placing clear obligations on high-risk systems while banning those that pose unacceptable threats to rights and safety. With this approach, Chile aims to promote ethical AI innovation, align with international standards such as ISO/IEC 42001, and establish the country as a regional leader in trustworthy AI governance. For businesses operating in or engaging with Chile, understanding these requirements early is key to building compliance and competitive advantage.
What organisations should know now
Chile continues to take a leading role in Latin America on artificial intelligence (AI) regulation. Its proposed AI law (Boletín 16821-19) is moving through Congress and reflects a risk-based approach, aligned with global trends but tailored to Chile's context. Below is a breakdown of the key elements, what's changed, and what you should be doing if you develop or deploy AI systems that affect Chile.
Why Chile matters
In May 2024, the Chilean government introduced the Law Regulating Artificial Intelligence Systems, a landmark proposal that establishes a risk-based framework for governing artificial intelligence. The bill draws inspiration from the European Union's AI Act and the ethical principles set out by the United Nations Educational, Scientific and Cultural Organization (UNESCO). By combining global best practices with national priorities, Chile aims to strike a balance between technological innovation and the protection of fundamental rights. This initiative positions the country as a regional leader in developing transparent, accountable, and trustworthy AI systems across Latin America.

What the law proposes - key features
Risk-based classification of AI systems
The bill defines four broad risk categories, with obligations increasing as the risk level rises; a brief illustrative sketch follows the list.
- Unacceptable risk: These AI systems are banned. They include applications that infringe on fundamental rights, such as those involving manipulation of behaviour, indiscriminate biometric categorisation, or social-scoring systems.
- High risk: Systems that may significantly affect safety, health, fundamental constitutional rights, or consumer protection. These will face specific obligations for documentation, governance, transparency, and oversight.
- Limited risk: Systems that have lower potential for harm, but still warrant transparency obligations (e.g. a chatbot interfacing with citizens, or generative AI in some contexts).
- No-evident risk: AI systems that pose minimal or no foreseeable risk to rights or safety, and therefore are subject to minimal regulatory intervention.
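
For teams that track their AI systems in code, the four tiers above could be captured in a simple internal structure. The sketch below is purely illustrative: the tier names follow the bill, but the obligation summaries and names such as `RiskTier` and `obligations_for` are our own shorthand, not terminology from the proposed law.

```python
from enum import Enum


class RiskTier(Enum):
    """Four risk tiers proposed in Chile's AI bill (Boletín 16821-19)."""
    UNACCEPTABLE = "unacceptable"   # prohibited outright
    HIGH = "high"                   # documentation, governance and oversight duties
    LIMITED = "limited"             # transparency duties (e.g. chatbot disclosure)
    NO_EVIDENT = "no_evident"       # minimal regulatory intervention


# Illustrative summary of obligations per tier; the authoritative list
# will come from the final law and its implementing guidance.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["do not deploy"],
    RiskTier.HIGH: [
        "risk assessment covering rights, safety and fairness",
        "documentation and traceability across the lifecycle",
        "human oversight",
        "defined governance roles",
    ],
    RiskTier.LIMITED: ["inform users they are interacting with AI"],
    RiskTier.NO_EVIDENT: ["no specific obligations; monitor for reclassification"],
}


def obligations_for(tier: RiskTier) -> list[str]:
    """Return the illustrative obligation summary for a given tier."""
    return OBLIGATIONS[tier]
```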

Compliance and governance obligations for high-risk systems
Organizations developing or using artificial intelligence in Chile must follow several key steps to meet the proposed legal requirements (a short implementation sketch follows the list):
- They should begin with a thorough risk assessment to identify how each AI system could affect fundamental rights, safety, or fairness.
- All stages of AI design, development, deployment, and maintenance must be carefully documented and recorded to ensure traceability.
- Users and individuals affected by AI systems should be clearly informed when they are interacting with such technologies and must be able to understand their purpose, capabilities, and limitations.
- Human oversight is essential: high-risk AI systems must not operate entirely on their own, and meaningful human supervision must be in place where necessary.
- Clear governance structures should be established so that roles and responsibilities for deploying or operating AI are well defined.
- Finally, the proposal encourages innovation through controlled testing environments, often referred to as regulatory sandboxes, that allow safe experimentation while maintaining oversight.
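
As a minimal sketch, assuming a Python codebase, the documentation and human-oversight points above could be operationalised with a simple traceability record and a review gate. The `LifecycleEvent` and `require_human_review` names, and the confidence threshold, are illustrative assumptions of ours, not requirements set by the bill.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class LifecycleEvent:
    """One traceability record in an AI system's design/deployment history."""
    system_id: str
    stage: str             # e.g. "design", "training", "deployment", "maintenance"
    description: str
    responsible_role: str  # who owns the decision (governance requirement)
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))


def require_human_review(decision: dict, confidence: float, threshold: float = 0.8) -> bool:
    """Route low-confidence or rights-impacting outputs to a human reviewer
    rather than letting the system act fully autonomously."""
    return confidence < threshold or decision.get("impacts_fundamental_rights", False)


# Example: record a deployment decision and gate an automated output.
log = [LifecycleEvent("credit-scoring-v2", "deployment",
                      "Rolled out to Chilean retail portfolio",
                      responsible_role="AI Risk Lead")]
needs_review = require_human_review({"impacts_fundamental_rights": True}, confidence=0.95)
```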
Institutional and enforcement framework
The proposed law establishes a future oversight authority, expected to be the Personal Data Protection Agency or an equivalent body, which will be responsible for supervising compliance and enforcing sanctions in cases of non-compliance. Alongside this authority, an Advisory Council or Technical Advisory Board will bring together representatives from government, academia, industry, and civil society to help define which AI systems fall under high-risk or limited-risk categories and to provide ongoing regulatory guidance. The bill also extends its reach beyond national borders, meaning that foreign organizations offering AI systems or services whose outputs are used within Chile will also be subject to its requirements.
Alignment with international standards
The Chilean proposal references international guidelines such as UNESCO's Recommendation on the Ethics of Artificial Intelligence and the OECD AI Principles, and supports alignment with standards such as ISO/IEC 42001 for AI management systems. The framework can therefore help multinational organisations maintain compliance across borders. Its risk-based approach also mirrors that of the EU AI Act, a clear example of the Brussels Effect in AI regulation.
What's new for 2025 / What to watch
Since the bill's introduction, several developments and issues have become more important:
- The bill was approved by the Chamber of Deputies in August 2025 and has now advanced to review by the Senate.
- Stakeholders have raised concerns about Chile's limited institutional capacity to oversee AI systems at scale and the lack of clarity around liability when AI causes harm.
- Generative AI applications, including large-scale model training and the creation of synthetic content such as deepfakes, are under closer scrutiny due to potential copyright and data-mining issues.
- There is an ongoing debate about balancing innovation and regulation, as Chile aims to promote AI development particularly for small and medium-sized enterprises without imposing excessive compliance burdens.
Practical advice for businesses and developers
- Start a gap analysis to assess how your current AI systems align with Chile's proposed regulatory framework (see the inventory sketch after this list).
- Classify each AI system according to Chile's four risk levels: unacceptable, high, limited, or no-evident risk.
- Review and update your documentation, governance structures, and risk assessments to address any gaps.
- Set up clear governance and assign roles for AI compliance within your organization.
- Define accountability by appointing responsible personnel such as an AI Risk Lead, Data Protection Officer, or Head of AI Governance.
- Ensure human oversight is built into the design and operation of all high-risk AI systems.
- Conduct regular testing, validation, bias and fairness audits, and cybersecurity assessments for high-risk systems.
- Maintain audit trails and detailed records of all development and deployment decisions.
- Inform users whenever they interact with an AI system and clearly communicate its capabilities, limitations, and decision logic.
- Label synthetic or AI-generated content transparently and obtain consent where necessary.
- Verify that training and operational datasets respect privacy, minimize bias, and comply with data protection and copyright requirements.
- Implement internal policies to safeguard fairness, non-discrimination, and human rights throughout the AI lifecycle.
- Prepare for future oversight by ensuring readiness to demonstrate compliance to Chile's forthcoming supervisory authority.
- Establish continuous monitoring and review mechanisms to maintain compliance as regulations evolve.
- Treat compliance as an opportunity: ethical, transparent AI practices can serve as a market differentiator in Chile.
- Stay updated on regulatory developments and revisions to Chile's risk classification lists for AI systems.
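
To make the gap-analysis and classification steps above concrete, a compliance team might keep a machine-readable inventory entry per system. The sketch below is a starting point under our own assumptions: the field names and checklist items are illustrative, not fields prescribed by the Chilean bill.

```python
from dataclasses import dataclass
import json


@dataclass
class AISystemRecord:
    """Illustrative inventory entry for a Chile AI readiness gap analysis."""
    name: str
    risk_tier: str                  # "unacceptable" | "high" | "limited" | "no_evident"
    documented_lifecycle: bool      # design-to-maintenance records kept?
    human_oversight: bool           # meaningful human supervision in place?
    users_informed: bool            # disclosure when users interact with AI?
    synthetic_content_labelled: bool
    bias_audit_date: str | None     # last fairness/bias audit (ISO date) or None


def open_gaps(record: AISystemRecord) -> list[str]:
    """Return the checklist items that still need work for this system."""
    checks = {
        "documented_lifecycle": record.documented_lifecycle,
        "human_oversight": record.human_oversight,
        "users_informed": record.users_informed,
        "synthetic_content_labelled": record.synthetic_content_labelled,
        "bias_audit_completed": record.bias_audit_date is not None,
    }
    return [item for item, ok in checks.items() if not ok]


record = AISystemRecord("citizen-chatbot", "limited", True, True, False, True, None)
print(json.dumps({"system": record.name, "gaps": open_gaps(record)}, indent=2))
```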
Why it matters for you globally
Even for organizations based outside Chile, the country's proposed AI regulation is highly relevant. The law applies to any company whose artificial intelligence systems are deployed in Chile or produce outputs used by individuals or organizations within its territory. Chile's framework reflects a broader global movement toward risk-based regulation of AI, emphasizing accountability, transparency, and human oversight. Aligning with these principles now can strengthen an organization's governance structure and prepare it for future regulatory developments in other regions. Building a mature compliance framework for Chile not only ensures readiness for local requirements but also provides a strong foundation for expanding into other Latin American markets that are expected to follow similar regulatory paths.
Final thoughts
Chile's proposed AI regulation is not yet final, but the direction is clear: risk-based, rights-centred, aligned with international principles, and designed for both innovation and protection. For organisations, the key is to move from reactive to proactive: embed AI governance now, ahead of formal enforcement. The early movers will benefit.
At Nemko Digital, we help organisations navigate this evolving landscape, from assessing current AI systems and implementing governance to aligning with international standards and preparing for audits and certification. If you'd like to review your Chile AI compliance readiness, we'd be happy to discuss.
Dive further in the AI regulatory landscape
Nemko Digital helps you navigate the regulatory landscape with ease. Contact us to learn how.

