Shruti Kakade · Oct 24, 2025 · 7 min read

Italy’s National AI Law: Towards Sectoral AI Governance

Italy becomes the first EU Member State to translate the AI Act into national law. Law No. 132/2025 introduces sector-specific rules, criminal-liability updates, and a €1 billion AI investment fund, setting a benchmark for responsible, human-centred AI governance.

 

Overview

On 17 September 2025, the Italian Parliament approved Law No. 132/2025, the first dedicated national AI law in the European Union. Entering into force on 10 October 2025, the legislation reinforces rather than duplicates the EU AI Act (Regulation 2024/1689). The framework sets out the core principles governing AI and its use, embeds responsibilities for AI governance and supervision directly into Italy’s legal system, and introduces detailed, sector-specific provisions covering healthcare, labour, public administration, and the justice system.

 

Italy’s AI Law - Core principles


Human-Centred AI

AI must enhance but not replace human decision-making. Systems must operate consistently with fundamental and constitutional rights, respecting transparency, fairness, privacy, gender equality, and sustainability. Continuous human oversight remains mandatory across the AI lifecycle.

 

Data Protection and Transparency

AI systems must process data lawfully and fairly, informing users, clearly and accessibly, about data use, risks, and opt-out rights. The law safeguards freedom of expression and media pluralism, prohibiting algorithmic bias or information distortion.

 

Safeguarding Democracy

AI applications that interfere with democratic institutions or distort public debate are explicitly banned, addressing risks of disinformation and opinion manipulation.

 

AI as an Economic Lever

Public authorities will promote AI adoption, particularly among SMEs, to strengthen productivity, industrial competitiveness, and innovation across Italy’s economy.

Figure 1. Core principles governing AI and its use under Italy’s Law No. 132/2025, the “AI Law”.

 

Sector-specific rules

Italy’s AI Law translates the EU AI Act’s broad principles into concrete sectoral requirements, ensuring that the use of AI across critical domains remains transparent, ethically grounded, and under human control.

In healthcare, AI technologies may support prevention, diagnosis, and treatment but can never replace medical professionals. Patients must always be informed when AI tools are used in their care, reinforcing transparency and trust in medical decision-making.

In the labour sector, AI must be deployed to improve working conditions and uphold human dignity, not to monitor or discriminate. Employers are required to disclose any use of AI in recruitment, evaluation, or daily operations, guaranteeing fairness and accountability at the workplace.

Within public administration, AI can help streamline procedures and improve efficiency, but final decisions must remain with human officials. Public bodies are expected to invest in training and organizational safeguards to ensure responsible use of AI in government services.

In the justice system, AI may assist in administrative and analytical tasks, such as managing caseloads or supporting judicial workflows, yet legal interpretation and judgment remain exclusively with judges. The Ministry of Justice oversees AI use in this domain and ensures proper training to raise awareness of both its benefits and risks.

Together, these provisions make Italy’s AI governance framework practical, human-centred, and sector-aware, turning regulatory principles into enforceable operational standards.

 

Figure 2. Sector-specific provisions of Italy’s “AI Law” covering healthcare, labour, public administration, and the justice system.

 

Governance and oversight

The coordinated framework anchors Italy’s AI oversight in transparency and institutional accountability, creating a direct bridge between national governance and the EU-wide regulatory architecture.

 

| Authority | Role / Responsibility |
|---|---|
| Presidency of the Council of Ministers | Defines the National AI Strategy, updated every two years. |
| Coordination Committee | Provides strategic guidance and oversees AI development in the public and private sectors. |
| Agency for Digital Italy (AgID) | Acts as notifying authority, handling conformity assessments and accreditation. |
| National Cybersecurity Agency (ACN) | Serves as surveillance and sanctioning authority and as Italy’s EU contact point. |
| Other regulators | The Bank of Italy, CONSOB, IVASS, AGCOM, and the Garante (DPA) collaborate to ensure cross-sector alignment. |

Table 1. Italian authorities responsible for oversight, coordination, and enforcement of Italy’s AI Law.


Criminal and IP law reforms

The AI Law introduces new criminal provisions targeting the unlawful creation or dissemination of AI-manipulated content, such as deepfakes, to protect individuals from reputational and privacy harm. It also establishes an offense for unauthorized text-and-data mining (TDM), the automated extraction of digital content, when performed without the rightsholder’s consent or lawful basis.

 

Aggravated Penalties

Criminal penalties are heightened for offenses committed using AI tools, recognizing the amplified harm and complexity these technologies can introduce. This includes increased sanctions for crimes such as market manipulation, fraud, or violations of political rights when facilitated by AI systems.

 

Copyright Clarifications

The law affirms that copyright protection applies only to works of human authorship or those created with meaningful human intellectual contribution. Purely machine-generated content, lacking human creativity, does not qualify for copyright protection under Italian law.

 

TDM Exceptions

While text-and-data mining for AI training is permitted where users have lawful access to content, it remains subject to Articles 70-ter and 70-quater of Italian Copyright Law. These provisions allow rightsholders to exercise an opt-out mechanism, ensuring their data and works are not used without consent.

 

Jurisdictional Change

To ensure legal consistency, the AI Law centralizes jurisdiction over AI-related disputes within specialized tribunals, excluding the Justice of the Peace (Giudice di Pace). This move concentrates technical expertise in higher courts to better address the complexity of AI-driven cases.


Next steps: Government decrees (within 12 Months)

Within twelve months of the law’s entry into force, the Italian government will issue a series of implementing decrees to operationalize and expand the framework established by Law No. 132/2025. These decrees will:

  • Align national supervisory powers with those provided under the EU AI Act, ensuring consistency in enforcement and oversight.
  • Introduce new AI-specific criminal offenses targeting unsafe or negligent system deployment that could endanger public safety or state security.
  • Clarify liability and burden-of-proof rules for damages caused by AI systems, reflecting the complexity of attributing responsibility in automated decision-making.
  • Establish procedures for the removal of unlawful AI-generated content and set out proportionate sanctions to deter misuse.

Together, these measures aim to create a robust legal framework that balances innovation with accountability.

 

Implications for companies

Although the Italian AI Law does not introduce additional EU-level compliance obligations, it carries significant organizational and governance implications for companies operating in or through Italy. Businesses are expected to update their Organizational, Management and Control Models (under Legislative Decree 231/2001) to incorporate newly recognized AI-related predicate offenses, ensuring that accountability structures and internal controls address the use of artificial intelligence systems.

In parallel, companies must closely monitor forthcoming government decrees that will define key aspects of liability, compensation, and the burden of proof in cases involving AI-related harm. Contractual frameworks should also be reviewed and updated to specify the allocation of risks, responsibilities, and indemnities arising from AI-driven operations, including provisions for insurance coverage. Failure to address these requirements may expose organizations to administrative and criminal liability for misconduct or negligence linked to AI technologies, underscoring the need for proactive compliance and governance measures.

 

Why it matters: Italy’s role in Europe’s AI governance landscape

Italy’s Law No. 132/2025 marks the first practical translation of the EU AI Act into a national legal system, combining ethical ambition with enforceable detail. It offers a template for Member States seeking to harmonize EU-wide rules with domestic priorities, ensuring human-centred, transparent, and accountable AI.

Moreover, its hybrid model, combining regulatory rigor with industrial strategy, positions Italy as a testbed for harmonized AI governance across the EU. As implementation unfolds, it is expected to inform similar initiatives in France, Spain, and Germany, and to influence the broader debate on AI liability, criminal accountability, and content moderation in Europe. As Europe moves from AI policy design to implementation, Italy’s example may set the tone for sectoral governance and trust-based innovation across the continent.

 

Key takeaways

  • Italy is first in the EU to enact a national AI law complementing the EU AI Act.
  • The law strengthens sectoral accountability, especially in healthcare, labour, and justice.
  • New offenses and liability rules address AI misuse and manipulated content.
  • €1 billion investment to accelerate responsible AI development.
  • Companies must update compliance models and monitor forthcoming decrees.

 

How we can help

At Nemko Digital, we help organizations navigate the rapidly evolving AI regulatory landscape by translating complex legal requirements into actionable compliance frameworks. Whether your organization is developing AI-driven products or managing AI supply-chain risks, Nemko Digital provides tailored advisory and testing services to ensure your systems meet the highest standards of safety, accountability, and trustworthiness.

Shruti Kakade
Shruti has been actively involved in projects that advocate for and advance AI ethics through data-driven research and policy. Before starting her Master’s, she worked on interdisciplinary applications of data science and analytics. She holds a Master’s degree in Data Science for Public Policy from the Hertie School of Governance, Berlin, and a bachelor’s degree in Computer Engineering from the Pune Institute of Computer Technology, India.
