
AI Governance Brazil

Brazil's approach to AI governance balances innovation with ethical considerations. Learn about policies, standards, and compliance requirements.

Brazil's AI governance centres on the draft AI law (Bill No. 2,338/2023), LGPD rules for automated decisions, emerging INMETRO/ABNT standards, and sector guidance for finance, health, and public services.

Balancing innovation and accountability in Latin America's largest digital market

 

Brazil is entering a decisive stage in shaping its national approach to artificial intelligence (AI). With Bill No. 2,338/2023 progressing through Congress, the country is poised to enact one of Latin America's first comprehensive AI laws, establishing risk-based requirements for transparency, accountability, and safety.

For organisations deploying or supplying AI-enabled technologies in Brazil, this is the time to prepare governance frameworks that reflect the upcoming obligations while maintaining flexibility for rapid innovation.

 

Why Brazil's AI policy matters

As Latin America's largest economy, Brazil's decisions will shape how AI is regulated across the region. The government has defined AI as a strategic national priority, linking it to innovation, productivity, and digital-transformation goals. At the same time, regulators are increasingly focused on rights protection, algorithmic transparency, and bias mitigation, aligning with international models like the EU AI Act while tailoring them to Brazil's local context and institutions.

 

What's new in 2025

  • Bill No. 2,338/2023 (Brazilian AI Act): approved by the Senate in late 2024, now under review by the Chamber of Deputies. It classifies AI systems by risk tier, introduces transparency and accountability obligations, and creates a national oversight framework.
  • ANPD (National Data Protection Authority): expected to act as the lead supervisory body, coordinating with sector regulators for implementation.
  • LGPD still applies: Brazil's General Data Protection Law (Lei Geral de Proteção de Dados) remains the key legal backbone for automated decision-making and personal-data processing in AI contexts.
  • Alignment with global standards: ongoing participation in ISO/IEC 42001 (AI Management Systems) and OECD's AI Policy Observatory ensures compatibility with international benchmarks.

 

Fig 1.0 A layered overview of Brazil’s draft AI regulatory ecosystem showing the interaction between the forthcoming AI Act (Bill No. 2,338/2023), the National Data Protection Authority (ANPD), sectoral regulators, and technical-standards bodies (INMETRO / ABNT) under a unified risk-based governance model.

 

Core features of Brazil's draft AI framework

As Brazil moves closer to adopting its first dedicated AI law, organisations should understand how the proposed framework is structured and what obligations it introduces. The draft law lays out a clear, risk-based foundation designed to promote trustworthy, human-centred AI development.

 

Risk-based structure

The draft law introduces three categories of AI systems, summarised in the table below (and in the sketch that follows it):

 

| Category | Definition | Regulatory Implication |
|---|---|---|
| Excessive Risk | Systems that manipulate behaviour, exploit vulnerabilities, or enable social scoring or surveillance. | Prohibited. |
| High Risk | Systems used in critical domains such as finance, healthcare, infrastructure, or public administration. | Subject to strict obligations (documentation, transparency, human oversight). |
| Limited or Low Risk | General commercial or consumer applications. | Must ensure transparency and safe operation but face lighter controls. |
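
By way of illustration only, the sketch below shows how these tiers and their headline obligations might be recorded internally; the tier names and checklist items paraphrase the table above, while the class names and functions are hypothetical conventions of our own, not anything defined by the draft law.

```python
from enum import Enum


class RiskTier(Enum):
    """Risk tiers proposed in Bill No. 2,338/2023 (names are our shorthand)."""
    EXCESSIVE = "excessive"   # prohibited outright
    HIGH = "high"             # strict documentation, transparency, oversight duties
    LIMITED = "limited"       # lighter transparency and safety duties


# Illustrative mapping from tier to the headline obligations in the table above.
OBLIGATIONS = {
    RiskTier.EXCESSIVE: ["Do not deploy: the draft law prohibits these systems."],
    RiskTier.HIGH: [
        "Maintain risk-assessment documentation",
        "Ensure transparency and explainability of decisions",
        "Establish human-oversight procedures",
    ],
    RiskTier.LIMITED: [
        "Disclose that users are interacting with an AI system",
        "Ensure safe operation",
    ],
}


def obligations_for(tier: RiskTier) -> list[str]:
    """Return the illustrative obligation checklist for a given tier."""
    return OBLIGATIONS[tier]


if __name__ == "__main__":
    for item in obligations_for(RiskTier.HIGH):
        print("-", item)
```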

 

Organisational obligations

Under the draft, developers, deployers, and operators must:

  • Maintain risk-assessment documentation for each AI system (a record sketch follows this list).
  • Establish human-oversight procedures.
  • Ensure explainability and transparency for automated decisions.
  • Implement incident reporting and bias-mitigation mechanisms.
  • Cooperate with ANPD or sector regulators for audits and corrective actions.
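
One way to keep that evidence in a consistent shape is a per-system record like the hypothetical dataclass below; the field names simply mirror the bullets above and are our own assumption, not an official ANPD or draft-law template.

```python
from dataclasses import dataclass, field
from datetime import date


@dataclass
class RiskAssessmentRecord:
    """Illustrative per-system record mirroring the obligations listed above."""
    system_name: str
    risk_tier: str                   # e.g. "high" or "limited"
    intended_purpose: str
    human_oversight_procedure: str   # who can intervene, and how
    explainability_notes: str        # how automated decisions are explained to affected people
    bias_mitigation_measures: list[str] = field(default_factory=list)
    incident_contacts: list[str] = field(default_factory=list)  # for reporting to ANPD or a sector regulator
    last_reviewed: date = field(default_factory=date.today)


# Example entry (all values are invented for illustration).
record = RiskAssessmentRecord(
    system_name="credit-scoring-v2",
    risk_tier="high",
    intended_purpose="Consumer credit decisions",
    human_oversight_procedure="Analyst review of all declined applications",
    explainability_notes="Reason codes surfaced to applicants on request",
    bias_mitigation_measures=["Quarterly disparate-impact testing"],
    incident_contacts=["ai-incidents@example.com"],
)
```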

 

Data-protection integration

LGPD continues to shape how AI systems process personal data.

  • Automated decision-making must respect individuals' right to explanation and contestation.
  • Sensitive data (biometric, health, financial) triggers stricter impact assessments (see the check sketched after this list).
  • Controllers and processors remain accountable for lawful and proportionate AI use.
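
To show how these LGPD touchpoints might translate into a pre-deployment check, here is a deliberately simple, hypothetical helper; the data categories come from the examples in the bullets above, and the returned actions are a sketch rather than an official checklist.

```python
def lgpd_follow_up_actions(data_categories: set[str],
                           makes_automated_decisions: bool) -> list[str]:
    """Return illustrative LGPD follow-up actions for an AI system (sketch only)."""
    sensitive = {"biometric", "health", "financial"}  # examples cited in the text above
    actions = []
    if data_categories & sensitive:
        actions.append("Run a stricter impact assessment before processing")
    if makes_automated_decisions:
        actions.append("Document how individuals can obtain an explanation and contest the decision")
        actions.append("Confirm a lawful basis and apply data minimisation")
    return actions


print(lgpd_follow_up_actions({"health", "contact"}, makes_automated_decisions=True))
```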

 

Sector-specific considerations

Brazil's AI governance will not apply uniformly across all industries. Different sectors face distinct risks, data sensitivities, and supervisory bodies, meaning compliance obligations will vary depending on where and how AI is deployed.

Financial Services: The Central Bank of Brazil and the securities regulator (CVM) require model-governance documentation, explainable credit-decision systems, and fair-lending protocols.

Healthcare: AI used in diagnostics or clinical decision support must undergo safety and efficacy validation, follow the device-classification rules of ANVISA (Brazil's health regulator), and comply with LGPD's sensitive-data protections.

Public Sector: AI deployment by government entities must ensure algorithmic transparency, maintain audit trails, and undergo impact assessments for citizen-facing applications.

Technology and Industry: Manufacturers embedding AI into connected products, from industrial sensors to consumer devices, will need to demonstrate compliance with both AI and cybersecurity standards, following INMETRO/ABNT technical guidance.

 

Fig 2.0 AI risk varies significantly across sectors in Brazil, requiring tailored regulatory approaches.

Our key observations

  • The AI Bill (No. 2,338/2023) mirrors the EU's risk-based structure but places stronger emphasis on social inclusion and fairness, reflecting regional priorities.
  • Integration between AI governance and LGPD data protection will be essential; the two regimes will operate in parallel.
  • Market readiness remains low: most organisations lack formal AI inventories or oversight boards, which makes early preparation a compliance opportunity.
  • Cross-recognition of AI certifications (ISO/IEC 42001, INMETRO/ABNT standards) will likely accelerate in 2026–2027, creating a route for trusted-AI labelling across Latin America.

 

Building compliance readiness

Even before the law takes effect, organisations should take these steps:

  1. Inventory and classify AI systems operating in Brazil (a minimal inventory sketch follows these steps).
  2. Conduct risk assessments aligned with draft law categories.
  3. Develop transparency documentation covering model purpose, data sources, and decision logic.
  4. Set up human-oversight controls for critical applications.
  5. Integrate LGPD obligations into your AI lifecycle (data minimisation, lawful basis, impact assessments).
  6. Prepare for audits and monitoring by maintaining technical and organisational records.
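
For step 1 in particular, an inventory does not need to be elaborate to be useful. The hypothetical structure below is one minimal way to capture steps 1 to 6 for each system; every name, path, and value is invented for illustration.

```python
# A minimal, hypothetical AI-system inventory covering the six readiness steps above.
ai_inventory = [
    {
        "system": "chatbot-support",
        "deployed_in_brazil": True,                         # step 1: inventory
        "draft_law_tier": "limited",                        # step 2: risk classification
        "transparency_doc": "docs/chatbot-model-card.md",   # step 3: purpose, data sources, decision logic
        "human_oversight": "Escalation to human agents for account changes",  # step 4
        "lgpd": {"lawful_basis": "legitimate interest", "impact_assessment_done": False},  # step 5
        "audit_records": ["logs/chatbot/2025-Q1.jsonl"],    # step 6: technical and organisational records
    },
]

# Flag systems that still need an impact assessment before the law takes effect.
pending = [s["system"] for s in ai_inventory if not s["lgpd"]["impact_assessment_done"]]
print("Impact assessment pending for:", pending)
```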

 

Looking ahead

Brazil's AI Bill marks a turning point for the region: it moves from broad ethical principles to enforceable governance standards. Over 2025, expect:

  • Final adoption and secondary decrees defining sectoral obligations;
  • Designation of ANPD as lead authority and possible creation of an inter-agency AI Coordination Council;
  • Pilot regulatory sandboxes for innovation in high-impact sectors like fintech, healthtech, and smart cities;
  • Regional harmonisation, as Chile, Argentina, and Mexico follow similar risk-based models.

Organisations that embed governance now will gain a long-term competitive edge, turning compliance into trust and market advantage.

 

Conclusion

Brazil's AI governance journey shows a clear shift from voluntary ethics to binding, enforceable regulation. The combination of LGPD, the forthcoming AI Act, and emerging technical standards means AI systems will be judged not only by their performance but by their transparency, fairness, and accountability. For businesses, proactive compliance is not merely about avoiding penalties — it's a chance to strengthen reputation, ensure data integrity, and align with global customers' trust expectations. As Brazil integrates AI across industries, those who act early will lead in both innovation and responsibility.

 

Reach out to us!

To discuss your organisation's AI compliance strategy in Brazil or Latin America, contact our AI Governance Team at Nemko Digital. We help you stay compliant, build digital trust, and unlock innovation responsibly.

Dive further into the AI regulatory landscape

Nemko Digital helps you navigate the regulatory landscape with ease. Contact us to learn how.

Get Started on your AI Governance Journey