Nemko Digital Insights

GPAI Code of Practice: A New Guide for the EU AI Act

Written by Mónica Fernández Peñalver | July 16, 2025

The EU AI Office has released the GPAI Code of Practice, a voluntary tool that helps general-purpose AI model providers operating within the European Union meet their obligations under the EU AI Act. This framework addresses systemic risks while promoting transparency and innovation in GPAI development.

The European Commission's latest regulatory milestone represents a pivotal moment for AI governance globally. As organizations worldwide grapple with rapidly evolving AI capabilities, the GPAI Code of Practice provides a structured approach to ensuring responsible AI development while maintaining competitive advantage.

 

Significance and Objectives of the GPAI Code of Practice

 

Ensuring Compliance with EU AI Act Requirements

The General-Purpose AI Code of Practice serves as a key compliance tool for AI model providers under the EU AI Act. The framework sets out commitments for all GPAI model providers, with additional obligations for models that exceed specific computational thresholds and are therefore presumed to pose systemic risk, supporting systematic risk management across the AI ecosystem.

The Code is organized into three chapters:

  • Transparency
  • Copyright
  • Safety and security

 

As general-purpose AI models form the foundation of many AI systems in the EU, the AI Act requires transparency from their providers, which helps downstream providers integrate such models effectively into their own products. The Code's Transparency chapter includes a user-friendly Model Documentation Form that allows providers to record all required information in one place.

The Copyright chapter offers practical guidance to help providers establish a compliant policy under EU copyright law.

Some general-purpose AI models may pose systemic risks—such as threats to fundamental rights and safety, enabling the creation of chemical or biological weapons, or a potential loss of control over the model. The AI Act requires providers to identify and reduce such risks. The Safety and Security chapter outlines leading practices for managing these systemic risks.

 

Helping Providers Stay Transparent Under the AI Act

The Transparency chapter of the Code of Practice outlines three key actions that Signatories commit to. These actions help meet the transparency requirements set out in Article 53(1) of the AI Act and its Annexes XI and XII.

To support compliance, the Transparency chapter provides a user-friendly Model Documentation Form. This form helps Signatories gather all the required information in one place, saving time and effort.

Each item in the form clearly shows who the information is for: downstream providers, the AI Office, or national authorities. Information meant for the AI Office or national authorities will only be shared upon formal request, including the legal basis and purpose. These requests will be limited to the information needed at that time to carry out official responsibilities—such as assessing compliance with the AI Act, particularly when high-risk AI systems are built on general-purpose models provided by a different party.
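
For teams that stage this information in internal tooling before completing the official form, the sketch below shows one possible way to keep the items in a single place and tag each with its intended audience. It is illustrative only: the field names, example values, and audience tags are assumptions, and the Model Documentation Form published with the Code remains the authoritative reference.

```python
# Illustrative sketch only: one way to keep transparency items in a single place
# and tag each with its intended audience. Field names and values are assumptions,
# not the contents of the official Model Documentation Form.
MODEL_DOCUMENTATION = {
    "model_name": {
        "value": "example-model-v1",             # placeholder
        "audience": ["downstream_providers"],
    },
    "acceptable_use_policy_url": {
        "value": "https://example.com/aup",      # placeholder URL
        "audience": ["downstream_providers", "ai_office"],
    },
    "training_compute_estimate": {
        "value": "to be reported on request",    # placeholder
        "audience": ["ai_office", "national_authorities"],
    },
}

def items_for(audience: str) -> dict:
    """Return only the documentation items intended for the given audience."""
    return {
        name: item for name, item in MODEL_DOCUMENTATION.items()
        if audience in item["audience"]
    }

if __name__ == "__main__":
    # Example: items that would be shared with the AI Office upon a formal request.
    print(items_for("ai_office"))
```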

As required by Article 78 of the AI Act, any recipient of the information must keep it confidential. This includes protecting intellectual property, trade secrets, and other sensitive business data, as well as applying strong cybersecurity measures to keep the information secure.

 

Helping AI Providers Comply with Copyright Law

The Copyright chapter of the Code of Practice helps providers of general-purpose AI models meet their obligations under Article 53(1)(c) of the AI Act. This includes putting in place a clear copyright policy and taking practical steps to respect EU copyright and related rights law.

To support this, the chapter outlines a set of concrete commitments that Signatories agree to implement:

  • Create and Maintain a Copyright Policy
    Providers must develop, update, and enforce a copyright policy that covers how their models are trained and deployed in line with EU copyright law.
  • Use Only Lawfully Accessible Content
    When using web crawlers for training data, providers must avoid content behind paywalls or technological protections and steer clear of websites flagged by courts or authorities as persistent copyright infringers.
  • Respect Rights Reservations
    Providers must ensure their crawlers can recognize and follow machine-readable signals, such as robots.txt files, that indicate a rightsholder has reserved their rights under EU law (a minimal illustration follows this list).
  • Prevent Infringing Outputs
    To reduce the risk of copyright violations, providers must implement safeguards that stop models from generating infringing outputs and must clearly prohibit such uses in their terms and conditions.
  • Enable Complaints and Transparency
    Providers are required to designate a point of contact and make it easy for rightsholders to file complaints if they believe their rights have been violated. Complaints must be handled fairly and promptly.
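
As one concrete illustration of the rights-reservation commitment above, the minimal sketch below checks a site's robots.txt before fetching a page. The crawler user agent is a hypothetical placeholder, and a production crawler would also need error handling, caching, and support for opt-out signals beyond robots.txt.

```python
# Minimal sketch: honoring robots.txt as one machine-readable rights-reservation
# signal before fetching a page for training data. The user agent string is a
# hypothetical placeholder; real crawlers need error handling and caching.
from urllib import robotparser
from urllib.parse import urlparse

def may_fetch(url: str, user_agent: str = "example-gpai-crawler") -> bool:
    """Return True only if the site's robots.txt allows this user agent to fetch the URL."""
    parsed = urlparse(url)
    robots_url = f"{parsed.scheme}://{parsed.netloc}/robots.txt"

    parser = robotparser.RobotFileParser()
    parser.set_url(robots_url)
    parser.read()  # fetches and parses robots.txt; may raise on network failure
    return parser.can_fetch(user_agent, url)

if __name__ == "__main__":
    # Example usage with a placeholder URL.
    print(may_fetch("https://example.com/articles/sample-page"))
```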

 

This chapter also encourages transparency, responsible AI development, and collaboration with stakeholders to shape practical standards for copyright compliance in AI systems.

 

Managing Safety and Security Risks of General-Purpose AI

The Safety and Security chapter of the Code of Practice supports AI providers in meeting their responsibilities under the AI Act—particularly when it comes to preventing harm and ensuring trustworthy AI.

This chapter lays out practical steps for identifying and mitigating potential risks, especially systemic ones that may affect public safety, human rights, or the environment.

Here are the key commitments providers agree to:

  • Adopt a Risk-Based Approach
    Providers must identify risks early and assess their impact based on the intended and reasonably foreseeable use of their models. This includes evaluating how their models could be misused or cause unintended harm.
  • Apply State-of-the-Art Mitigation Measures
    Providers are expected to implement appropriate safeguards to reduce identified risks. These could include content filters, usage restrictions, and other technical and organizational controls tailored to the model’s capabilities.
  • Monitor and Update Continuously
    Risk management doesn’t stop at deployment. Providers must continuously monitor how their models are used, collect feedback, and update risk assessments and mitigation strategies over time.
  • Be Transparent and Accountable
    Providers must document their risk management processes and be ready to share relevant information with authorities, especially in high-risk scenarios.
  • Prepare for Emergencies
    Providers are encouraged to have mechanisms in place to respond to serious incidents, such as security breaches or misuse of their models, and to notify competent authorities when necessary (a minimal sketch of such an incident log follows this list).
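
To make the monitoring, documentation, and incident-notification commitments above more tangible, here is a minimal sketch of an internal incident record a provider might keep. The field names, severity scale, and notification threshold are illustrative assumptions rather than requirements taken from the Code.

```python
# A minimal sketch, assuming a provider keeps an internal log of serious incidents
# to support continuous monitoring, documentation, and notification duties.
# Field names, the severity scale, and the threshold are illustrative assumptions.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class IncidentRecord:
    model_version: str
    description: str
    severity: int                      # assumed scale: 1 (minor) to 5 (serious)
    detected_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    authorities_notified: bool = False

def needs_escalation(incident: IncidentRecord, notify_threshold: int = 4) -> bool:
    """Flag incidents at or above an assumed severity threshold for escalation."""
    return incident.severity >= notify_threshold

if __name__ == "__main__":
    incident = IncidentRecord(
        model_version="example-model-v1",
        description="suspected large-scale misuse via API",
        severity=4,
    )
    print(needs_escalation(incident))  # True: should be escalated internally
```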

 

By following these practices, AI developers can help ensure their systems are safe, secure, and aligned with EU values—fostering public trust in the rapidly evolving AI landscape.

 

Drafting Process of the GPAI Code

 

Role of the Working Group and Stakeholder Engagement

The European AI Office established a multi-stakeholder working group comprising industry experts, academic researchers, and civil society representatives. This collaborative approach ensured the GPAI Code of Practice reflects diverse perspectives while maintaining practical applicability for AI model providers.

 

Focus on Transparency and Open Consultation

The drafting process emphasized transparent consultation with affected stakeholders. The AI Office conducted extensive public consultations, gathering input from GPAI model providers, researchers, and advocacy groups to ensure comprehensive coverage of emerging risks and challenges.

 

Copyright Adherence and Intellectual Property Protection

Copyright law compliance represents a significant focus area within the Code. The framework establishes clear requirements for text and data mining (TDM) practices, prevention of copyright-infringing outputs, and adherence to the Robots Exclusion Protocol. These provisions ensure GPAI development respects existing intellectual property rights while enabling innovation.

 

Competitive Advantages of Early Compliance

Early adoption of GPAI Code of Practice requirements can provide significant competitive advantages. Organizations demonstrating proactive compliance may benefit from enhanced stakeholder trust, improved market access, and reduced regulatory uncertainty.

 

Broader Implications for the GPAI Ecosystem

 

Impact on AI Development and Innovation

The GPAI Code of Practice will fundamentally reshape AI development practices globally. Beyond direct compliance requirements, the framework establishes new standards for responsible AI development that influence industry best practices worldwide.

The Code represents the first comprehensive regulatory framework specifically addressing general-purpose AI models. This regulatory precedent will likely influence similar frameworks in other jurisdictions, creating a global trend toward systematic AI governance.

The implementation of the GPAI Code will provide valuable insights for future AI governance initiatives. As stakeholders gain experience with the EU AI Act's new requirements, implementation guidance will continue to evolve.

 

Frequently Asked Questions

 

What is a GPAI model?

A General-Purpose AI (GPAI) model is an AI model trained on large amounts of data that displays significant generality and can competently perform a wide range of tasks across multiple domains. Some advanced models, trained with very large amounts of compute, have capabilities significant enough that they may pose systemic risks.

 

What are the rules of the EU AI Act regarding GPAI?

The EU AI Act establishes obligations for all GPAI model providers, with additional requirements for models presumed to pose systemic risk; the Act presumes such risk when a model's cumulative training compute exceeds 10^25 floating-point operations. The obligations center on transparency, copyright, and, for systemic-risk models, safety and security. Fulfillment of these obligations must also be well documented.

 

Why is the GPAI Code of Practice important?

The Code provides guidelines for complying with Chapter V of the EU AI Act (obligations for providers of GPAI models). Overall, these ensure that GPAI models are developed and deployed safely, transparently, and in compliance with EU regulations while addressing potential systemic risks to society.

 

How is the Code of Practice being drawn up?

The European AI Office developed the Code through extensive stakeholder consultation, involving industry experts, researchers, and civil society representatives to ensure comprehensive coverage of risks and practical applicability.

 

Who is this relevant for?

The Code applies to GPAI model providers operating in the EU, particularly those developing models that exceed specific computational thresholds or demonstrate significant capabilities that could pose systemic risks.

 

What are the key compliance deadlines?

Organizations must begin implementing the GPAI Code's practical guidelines now, given that the EU AI Act's provisions on GPAI take effect on August 2, 2025.

 

Start Your AI Compliance Journey with Nemko Digital

The GPAI Code of Practice represents a transformative moment in AI governance, requiring organizations to fundamentally rethink their approach to AI development and deployment. Nemko Digital helps organizations navigate these complex requirements through comprehensive AI regulatory compliance services and expert guidance.

Our proven frameworks enable organizations to achieve compliance while maintaining competitive advantage and innovation capacity. From initial risk assessments to ongoing compliance monitoring, we provide the expertise and tools necessary for successful GPAI Code implementation.

 

Free Expert Session: Turn GPAI Compliance Into Competitive Advantage

Learn exactly what the new EU AI Act GPAI rules mean for your organization and how to prepare effectively. Join our upcoming webinar with Mónica Fernández Peñalver. Register now and access FREE Expert Insights.

Ready to ensure your AI systems meet EU requirements? Contact our AI compliance experts today to develop a customized implementation strategy that aligns with your business objectives while meeting all regulatory obligations. Visit our AI governance services to learn how we can support your compliance journey.

 

For the latest updates on AI regulatory developments and compliance strategies, explore our comprehensive resources at Nemko Digital Insights and stay ahead of the evolving regulatory landscape.