General Product Safety Regulation (GPSR)
Nemko Digital · February 4, 2025 · 5 min read


In 2023, the EU introduced a new legislative framework for product safety titled the General Product Safety Regulation. In this article, we provide an overview of the GPSR’s history, its biggest changes from previous legislation, and how AI systems fit into the framework of this updated regulation. 

What is the GPSR, and what are its implications for AI? 

The General Product Safety Regulation (GPSR) represents a significant shift in how product safety is regulated within the European Union. The GPSR carries notable implications for the development and deployment of AI products. A thorough understanding of the GPSR's requirements, alongside compliance with the closely related EU AI Act, helps organizations meet contemporary product safety expectations for their AI products.

The evolution from the GPSD (2001) to GPSR (2023)  

The GPSR, introduced in 2023, replaces the General Product Safety Directive (GPSD) of 2001. The GPSD was the foundation of product safety rules within the EU for over 20 years. It focused on ensuring consumer goods were safe before reaching the market. However, as consumer technology advanced, the need for an updated framework became evident—particularly to account for new and emerging risks related to digital products, artificial intelligence, and the broader complexity of connected devices. 

The new GPSR aims to address these gaps by emphasizing a proactive approach to product safety that accounts for the complexities of algorithmically mediated technologies. It provides clearer obligations for manufacturers, importers, and distributors while enhancing market surveillance and safety enforcement by authorities. This means AI products, like any other consumer product, must be demonstrably safe and be easily retrievable from the market in case of any safety concerns. 

Major updates in the GPSR and their impact on AI products

The GPSR includes several key updates compared to the previous directive: 

Increased scope for digital and AI-enabled products. The GPSR explicitly considers AI and digital components in its scope of product safety. With this updated definition of products, AI technologies that interact with consumers now need to meet safety standards like any physical product.
Enhanced manufacturer responsibilities. Manufacturers are now held to explicit standards for ensuring ongoing product safety throughout the AI product lifecycle. This includes not only pre-market risk evaluations, but also continuous post-market monitoring and the capacity to initiate recalls if unforeseen risks are detected.
Market surveillance and traceability. The GPSR emphasizes enhanced product traceability and requires businesses to maintain data that supports effective recalls. AI systems must therefore have sufficient traceability to allow authorities to understand the functionality and decision-making pathways that might impact consumer safety.

For AI products, these changes mean organizations need to design their systems with market withdrawal capabilities in mind. If an AI product demonstrates unsafe behavior after release, it must be possible to recall it or disable it easily, placing an effective limit on the potential harm to consumers. 

The relationship between the GPSR and the AI Act

The EU AI Act, adopted in 2024, focuses specifically on regulating AI systems based on their risk levels. If an AI system is classified as high-risk (for example, systems used in medical devices, critical infrastructure, or law enforcement), it falls under stringent requirements to ensure safety, transparency, and accountability. Importantly, compliance with the AI Act also effectively secures compliance with the GPSR for those high-risk products.

For AI products that are not considered high-risk, the GPSR remains the primary regulatory framework ensuring their safety. As a result, companies need to be diligent in understanding whether their AI system is categorized as high-risk and how that status affects their compliance obligations. 

The GPSR requires that products be safe throughout their lifecycle. Organizations must prepare safety protocols for any system updates and changes. This aligns with the AI Act's focus on transparency, record-keeping, and post-market monitoring. High-risk AI products that meet AI Act requirements are likely also compliant with GPSR demands, particularly regarding traceability and safety monitoring.

Preparing AI products for GPSR compliance 

Companies that are developing AI products should prepare for compliance requirements by mapping out expectations of both the GPSR and the AI Act during the development phase and onwards. For non-high-risk AI, the focus will be on ensuring the general safety of the product. Organizations developing low-risk systems should consider the potential risks arising from faulty algorithms, biased decision-making, or unintended behaviors. 

A key practical step is implementing effective post-market monitoring. AI products should include mechanisms that track performance and identify risks once deployed. This proactive approach is vital for meeting GPSR obligations, as it ensures that AI systems can be quickly adapted or withdrawn from the market if they pose safety concerns. 
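As a minimal illustration of what such a monitoring mechanism might look like in practice, the sketch below tracks a rolling error rate for a deployed AI system and flags when it crosses a threshold that warrants corrective action. The class name, window size, and threshold are illustrative assumptions, not values prescribed by the GPSR.

```python
from collections import deque


class PostMarketMonitor:
    """Tracks a rolling error rate for a deployed AI system and
    flags when performance degrades past a safety threshold.
    Window size and threshold are illustrative choices."""

    def __init__(self, window_size: int = 1000, error_threshold: float = 0.05):
        self.outcomes = deque(maxlen=window_size)  # True = faulty output observed
        self.error_threshold = error_threshold

    def record(self, is_error: bool) -> None:
        self.outcomes.append(is_error)

    @property
    def error_rate(self) -> float:
        if not self.outcomes:
            return 0.0
        return sum(self.outcomes) / len(self.outcomes)

    def needs_review(self) -> bool:
        """Signal that the product may need corrective action,
        e.g. an update, a restriction, or market withdrawal."""
        return self.error_rate > self.error_threshold


monitor = PostMarketMonitor(window_size=100, error_threshold=0.05)
for _ in range(90):
    monitor.record(False)  # normal outputs
for _ in range(10):
    monitor.record(True)   # a burst of faulty outputs
print(monitor.error_rate)      # 0.1
print(monitor.needs_review())  # True
```

In a real product, the recorded outcomes would come from user feedback, automated checks, or field telemetry, and a triggered review would feed into the organization's documented corrective-action process.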

Another critical consideration is product traceability. AI products must be designed with transparency in mind to allow authorities to understand both the data and the processes behind their decision-making. This facilitates higher efficiency for both safety assessments and market recalls when necessary. 
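One way to support this kind of traceability is to log, for each automated decision, the inputs, model version, and output in an append-only record that auditors or authorities could later inspect. The sketch below shows a simplified record schema; the field names and the tamper-evidence hash are assumptions for illustration, not a format mandated by the GPSR.

```python
import hashlib
import json
from datetime import datetime, timezone


def log_decision(model_version: str, inputs: dict, output, log: list) -> dict:
    """Append a traceable decision record: what the system saw,
    which model produced the result, and when. Field names are
    illustrative, not mandated by the GPSR."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs": inputs,
        "output": output,
    }
    # Hash the record so later tampering is detectable.
    record["record_hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True, default=str).encode()
    ).hexdigest()
    log.append(record)
    return record


audit_log: list = []
log_decision(
    "credit-model-1.3.0",          # hypothetical model identifier
    {"income": 42000, "age": 35},  # hypothetical input features
    "approved",
    audit_log,
)
print(len(audit_log))  # 1
```

A production system would persist such records to durable, access-controlled storage rather than an in-memory list, so they remain available for safety assessments and recall investigations.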

GPSR requirements on AI product withdrawal 

The GPSR mandates that all products, including AI-enabled ones, must have clear procedures for recall or market withdrawal if safety issues are discovered. Specifically, manufacturers must provide authorities with detailed product documentation, including risk analyses and descriptions of how the AI product has been designed and tested for safety. 

These documentation requirements of the GPSR are crucial to promote traceability of AI products. They reinforce the need for accessible built-in mechanisms that allow an AI system to be pulled from the market effectively. For AI systems embedded in consumer devices, this could mean having features that disable core functionalities if the product is deemed unsafe.
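For illustration, such a disable mechanism could gate the system's core functionality behind a remotely updatable safety status, so that a confirmed safety issue lets the manufacturer restrict or shut down the feature fleet-wide. This is a hypothetical sketch under the assumption of a local status flag; a real product would fetch the status from a signed, authenticated manufacturer endpoint.

```python
from enum import Enum


class ProductStatus(Enum):
    ACTIVE = "active"
    RESTRICTED = "restricted"  # degraded but safe operation
    WITHDRAWN = "withdrawn"    # core functionality disabled


class AIFeature:
    """Gates an AI-driven feature behind a remotely updatable
    safety status -- a hypothetical recall/disable mechanism."""

    def __init__(self):
        self.status = ProductStatus.ACTIVE

    def update_status(self, status: ProductStatus) -> None:
        # In a real product this would be driven by an authenticated
        # manufacturer endpoint, not a local method call.
        self.status = status

    def predict(self, data):
        if self.status is ProductStatus.WITHDRAWN:
            raise RuntimeError("Feature disabled following a product recall.")
        if self.status is ProductStatus.RESTRICTED:
            return {"result": None, "note": "Feature restricted pending safety review."}
        return {"result": f"prediction for {data}"}


feature = AIFeature()
print(feature.predict("sample")["result"])  # prediction for sample
feature.update_status(ProductStatus.WITHDRAWN)
```

The three-state design allows a graduated response: a suspected issue can restrict the feature while an investigation runs, and only a confirmed safety problem triggers full withdrawal.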

Conclusion 

The General Product Safety Regulation (GPSR) introduces a comprehensive safety framework for AI products, whether high-risk or not. For AI systems deemed high-risk, compliance with the AI Act will effectively align with the GPSR's safety requirements. For other AI products, the GPSR is the primary framework that organizations must look to as a standard for low-risk product safety. To ensure compliance with the GPSR, companies must employ robust safety practices, effective monitoring, and the ability to recall products from the market. 

Companies working on AI products should start preparing for the effects of the GPSR as soon as possible. A compliance plan starts with designing and carrying out risk assessments, and with embedding broader lifecycle management practices within the organization. Navigating this regulatory landscape with preparedness will not only help in complying with EU standards; it can also build a proactive reputation for your AI technology and reinforce consumer trust in an AI-augmented future.

Nemko Digital

Nemko Digital is formed by a team of experts dedicated to guiding businesses through the complexities of AI governance, risk, and compliance. With extensive experience in capacity building, strategic advisory, and comprehensive assessments, we help our clients navigate regulations and build trust in their AI solutions. Backed by Nemko Group’s 90+ years of technological expertise, our team is committed to providing you with the latest insights to nurture your knowledge and ensure your success.
