Artificial intelligence (AI), a defining technology of the twenty-first century, is becoming ever more deeply integrated into business and consumer goods. This accelerates innovation and commercial growth, but it also introduces new categories of risk, which authorities are now addressing with updated safety regulations. To reflect this reality, the European Union overhauled its product safety legislation with the General Product Safety Regulation (Regulation (EU) 2023/988, GPSR), which has applied since 13 December 2024.
The GPSR modernizes product safety law for the digital age. Besides replacing the General Product Safety Directive of 2001, it broadens the concept of "safety" beyond traditional physical hazards to include cybersecurity, mental health, and social well-being. For companies that design, produce, or market AI-enabled goods, the regulation represents a dramatic change in responsibilities and demands. The risks are not abstract: bias could mean a smart camera misidentifying a customer; performance drift might cause predictive maintenance tools to miss equipment failures; automation bias could lead workers to blindly follow AI recommendations; cybersecurity flaws may expose devices to hacking; and poorly designed alerts could create unnecessary stress for users. By making these risks part of the safety definition, the GPSR ensures businesses treat AI safety with the same seriousness as fire, electrical, or mechanical hazards.
The new regulatory landscape for AI-enabled products is defined by the EU AI Act and the GPSR. The AI Act concentrates on the trustworthiness of AI systems and their implications for fundamental rights, while the GPSR guarantees that AI does not jeopardize the safety of consumer products. For enterprises, compliance now means integrating AI safety across the product's lifespan rather than merely checking boxes. Early adopters will not only lower their financial and legal risks but also gain the confidence of authorities and consumers.
Product safety regulations traditionally guarded against physical injuries such as electric shocks, fire hazards, and mechanical failures. Connected and AI-enabled devices, however, present new risk profiles, including bias, performance drift, automation bias, cybersecurity flaws, and mental-health effects, and the GPSR explicitly brings these within its purview.
The GPSR sets out a series of concrete duties that businesses must integrate into both product design and day-to-day operations. These obligations are not abstract principles but actionable requirements that determine whether a product can remain on the EU market. For companies working with AI-enabled or connected devices, the following areas are especially critical:
1. Lifecycle safety (article 5): Products must remain safe not only at launch but also after upgrades, retraining, or new data inputs. The old "once safe, always safe" approach no longer applies.
2. Ongoing risk evaluation (articles 6 and 9): Risk analysis must cover foreseeable misuse, bias, drift, automation bias, and effects on social or mental health, and the resulting documentation must be kept for ten years.
3. Post-market safety gate and monitoring (articles 19–22): Manufacturers must operate post-market surveillance (PMS) systems to identify hazards, report incidents, and participate in the EU Safety Gate rapid alert system.
4. Shared responsibility for changes (article 13): A substantial modification, such as retraining an AI model, releasing a major software update, or integrating third-party IoT devices, can legally make the modifier the new "manufacturer", with full responsibility for ensuring the product still complies with the GPSR.
In practice, every update, whether a patch, an AI model retraining, or a major software release, can trigger new safety obligations and even shift manufacturer liability to whoever made the change. Compliance therefore cannot be an afterthought; it needs to be integrated into the engineering pipeline so that safety checks, testing, and documentation happen as part of the release process.
5. Transparency and user information (article 8, recital 31): Users must receive clear, understandable information about the AI's capabilities, limitations, false-alarm rates, and data-collection practices.
6. Supply-chain accountability (articles 7–12): Manufacturers, importers, distributors, fulfilment service providers, and online marketplaces all carry obligations. Every actor must maintain traceability of components and responsibilities and ensure compliance.
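To illustrate how lifecycle safety and the article 13 responsibility shift can be wired into a release process, the sketch below shows a hypothetical pre-release "safety gate" check. All names, fields, and the list of checks are illustrative assumptions, not terms from the regulation; a real implementation would plug into a company's CI/CD and quality systems.

```python
from dataclasses import dataclass, field

@dataclass
class ReleaseCandidate:
    """Hypothetical record for an AI product update awaiting release."""
    version: str
    is_substantial_modification: bool  # e.g. model retraining or a major update
    safety_checks: dict = field(default_factory=dict)  # check name -> passed?

def gpsr_release_gate(rc: ReleaseCandidate, required_checks: list[str]) -> list[str]:
    """Return blocking issues; an empty list means the update may ship.

    Mirrors the lifecycle-safety idea: every update re-runs safety
    verification before release (article 5), and substantial modifications
    additionally require a responsibility review (article 13).
    """
    issues = [c for c in required_checks if not rc.safety_checks.get(c, False)]
    if rc.is_substantial_modification and "responsibility_review" not in rc.safety_checks:
        issues.append("responsibility_review")
    return issues
```

For example, a retrained model whose bias evaluation passed but whose drift evaluation was skipped would be blocked until both the missing check and the responsibility review are completed.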
The GPSR does not stand alone; it works alongside other major EU frameworks such as the AI Act, the Cyber Resilience Act (CRA), and the revised Product Liability Directive (PLD). Together, these regulations create a comprehensive safety and accountability net for AI-enabled products. The tables below summarize the core GPSR obligations and how the regulation connects to these neighbouring frameworks.
| Obligation | Article | Core Requirement | What It Means for AI Products | 
|---|---|---|---|
| Lifecycle Safety | Art. 5 | Products must remain safe after updates, retraining, or data changes. | Each AI update triggers renewed safety verification. | 
| Ongoing Risk Evaluation | Arts. 6 & 9 | Continuous assessment of bias, drift, misuse, and mental-health effects. | Maintain risk logs and keep records for 10 years. | 
| Post-Market Monitoring | Arts. 19–22 | Set up systems to detect, report, and act on incidents. | Integrate PMS and Safety Gate reporting into QA. | 
| Shared Responsibility | Art. 13 | Major modifications can transfer "manufacturer" liability. | Retraining or IoT integration may shift compliance duties. | 
| Transparency to Users | Art. 8, Rec. 31 | Provide clear info on AI functions, false-alarm rates, and data use. | Update user manuals and digital notices. | 
| Supply-Chain Accountability | Arts. 7–12 | All economic operators share compliance duties. | Ensure traceability and updated contracts. | 

| Regulation | Core Focus | GPSR Connection |
|---|---|---|
| GPSR (2023/988) | Consumer product safety (physical + AI risks) | Baseline safety net for all products, including AI-enabled. | 
| AI Act (2024/1689) | Trustworthiness & fundamental rights in AI | Governs how AI systems are designed & documented. | 
| Cyber Resilience Act (CRA) | Cybersecurity of connected products | GPSR requires cybersecurity as part of safety. CRA sets detailed obligations. | 
| Product Liability Directive (PLD, revised 2024) | Civil liability for defective AI/tech products | GPSR non-compliance increases exposure under the PLD. | 
In essence, the GPSR turns AI safety into a continuous compliance discipline rather than a one-time certification exercise. To remain on the EU market, companies must prove that their AI-enabled products are safe across their entire lifecycle, with traceable documentation and active monitoring. For executives, this means compliance is a market-access prerequisite, not an afterthought: AI safety must be engineered into every update and retraining cycle, and supply-chain coordination and transparent communication are essential to maintaining regulatory confidence and consumer trust. Early adopters who build these mechanisms now will not only reduce legal exposure but also gain a strategic advantage, positioning themselves as reliable, future-ready innovators under the EU's evolving product safety framework.
GPSR compliance has far-reaching implications that go well beyond legal formalities. Companies that fall short face financial, operational, and reputational consequences that directly affect market access and long-term competitiveness. Authorities may impose substantial fines under the national penalty regimes foreseen by article 44, restrict sales, or order recalls and market withdrawals (articles 32–36). Enforcement actions are made public through the EU Safety Gate portal, where incidents are instantly visible to regulators, competitors, and consumers, leading to immediate loss of trust and potential investor concern. Because of the GPSR, AI safety has evolved from a technical consideration into a strategic commercial issue. Without demonstrated conformity, a product cannot be lawfully sold in the EU, and for harmonized product categories it cannot obtain the CE marking. For executives, this makes compliance not only a regulatory requirement but also a market-access condition. Consider, for example, smart toys withdrawn from EU shelves because of hidden microphones or unsafe data practices: a single non-compliance incident can trigger a sales ban, public exposure, and lasting reputational damage.
Seen through this lens, AI safety under the GPSR is a business survival issue. Companies that act early, by embedding risk assessment, documentation, and monitoring into their engineering cycles, protect both their regulatory standing and their brand integrity. Early movers also gain a trust advantage with consumers and authorities alike, demonstrating transparency and accountability in a rapidly evolving regulatory landscape.
The GPSR was adopted in May 2023 and has applied since 13 December 2024, replacing the 2001 General Product Safety Directive. From that date, all businesses placing products on the EU market, including AI-enabled and connected devices, must comply with its broadened safety obligations. Market surveillance under the GPSR has already started, and authorities are increasing inspections across 2025 (Figure 2).
Figure 2 suggests companies should treat 2025 as the year of GPSR readiness: technical files, AI risk assessments, and post-market monitoring systems must be organized now to protect market access and avoid costly disruptions. Since enforcement and inspections are expected to intensify from 2025 onwards, companies must take a methodical and proactive approach to AI safety assessments to fulfil their GPSR responsibilities.
For example, technical documentation covering AI models, training data, updates, and cybersecurity precautions must be kept for ten years, so the company's governance framework should define how such records are created, stored, and retrieved. Likewise, AI-specific risk evaluations need to be broadened beyond conventional physical threats, and designing how these evaluations will be carried out (for existing and future AI-embedded products) is crucial for avoiding unnecessary delays in market entry and procurement. Robust post-market monitoring systems are also necessary to record problems, track real-world performance, and adjust to emerging hazards. To guarantee accountability for changes, upgrades, and cybersecurity measures, businesses must revise their contracts with distributors and suppliers to clearly define roles throughout the supply chain. Lastly, openness with consumers is essential: companies need to give transparent, easy-to-understand information on how the AI functions, its limitations, and how data is managed. Beyond guaranteeing legal compliance, this methodical strategy increases customer confidence and the product's long-term market viability.
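To make the ten-year retention duty concrete, here is a minimal, hypothetical sketch of a technical-documentation record with a computed retention window. The record types, field names, and helper functions are illustrative assumptions, not a prescribed format.

```python
from dataclasses import dataclass
from datetime import date

RETENTION_YEARS = 10  # GPSR requires technical documentation to be kept for ten years

def _add_years(d: date, years: int) -> date:
    """Shift a date by whole years, handling the 29 February edge case."""
    try:
        return d.replace(year=d.year + years)
    except ValueError:  # 29 February in a non-leap target year
        return d.replace(year=d.year + years, day=28)

@dataclass(frozen=True)
class TechnicalRecord:
    """Hypothetical entry in a GPSR technical-documentation register."""
    product_id: str
    record_type: str  # e.g. "model_card", "risk_assessment", "update_log"
    created_on: date

    @property
    def retain_until(self) -> date:
        # Keep the record for ten years from the date it was placed on file.
        return _add_years(self.created_on, RETENTION_YEARS)

def may_delete(record: TechnicalRecord, today: date) -> bool:
    """A record may only be purged once its retention window has elapsed."""
    return today > record.retain_until
```

A governance framework would wrap records like this with access controls, versioning, and audit trails; the sketch only shows the retention logic itself.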
Beyond compliance, companies that communicate AI safety clearly can turn regulation into a competitive advantage. Customers are more likely to trust and stay loyal to brands that demonstrate a commitment to safety and avoid taking shortcuts.
Compliance with the GPSR can look overwhelming on paper, but it doesn't have to be. The key is a structured, practical, fast-to-execute process. At Nemko Digital, we don't just explain compliance; we work side by side with your teams to get it organized and ready. Our GPSR AI Safety Assessment Framework gives businesses a clear, actionable path to GPSR readiness. Below is a quick overview of how it works.
1. Audit existing documentation – Ensure your technical files capture AI models, updates, and cybersecurity safeguards (article 9).
2. Expand risk assessments – Cover AI-specific risks like bias, drift, automation bias, privacy, and mental health alongside physical safety (articles 6 & 9).
3. Set up post-market monitoring – Establish processes for incident logging, Safety Gate reporting, and continuous AI performance checks (articles 19–22).
4. Review supply chain contracts – Clarify responsibilities for updates, modifications, and compliance across all actors (articles 7–13).
5. Strengthen customer transparency – Update manuals, warnings, and client communication to explain AI limits and data handling (article 8, recital 31).
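Step 3 above can be sketched as a minimal incident log feeding a PMS process. This is an illustrative assumption of how such a log might be structured, not a prescribed format; the actual Safety Gate reporting criteria and channels are defined by the regulation and national authorities, not by severity labels like the ones below.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Incident:
    """Hypothetical post-market incident record for an AI-enabled product."""
    product_id: str
    description: str
    severity: str  # assumed scale: "low" | "medium" | "high"
    reported_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

class PostMarketLog:
    """Minimal sketch of an incident log supporting PMS duties (articles 19-22)."""

    def __init__(self) -> None:
        self.incidents: list[Incident] = []

    def record(self, incident: Incident) -> None:
        # Every real-world problem is logged, regardless of severity.
        self.incidents.append(incident)

    def pending_safety_gate_review(self) -> list[Incident]:
        # High-severity incidents are queued for review against the
        # Safety Gate reporting duty; lower severities stay in the PMS log.
        return [i for i in self.incidents if i.severity == "high"]
```

In practice such a log would feed dashboards, trend analysis, and the formal reporting workflow; the sketch only shows the capture-and-triage core.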
Conducted over a short engagement cycle, the assessment benchmarks your existing product documentation, risk evaluations, AI lifecycle controls, and governance mechanisms against the regulation's key provisions, delivering a precise view of compliance readiness. The outcome is a concise, executive-level readiness report and compliance evaluation statement, forming a solid foundation for demonstrating conformity during market surveillance or CE-marking processes.
This is not theory; it's a step-by-step playbook we execute together. With our expertise, businesses that start early can transform GPSR compliance into a trust and market advantage, securing EU market access with confidence.
To make this manageable, and so that companies don't need to start from scratch, we offer ready-to-use GPSR safety assessment checklists and monitoring templates that can be plugged directly into workflows. These tools help teams systematically address lifecycle risks, document compliance, and maintain audit readiness without slowing down development.
Our assessment, combined with tailored guidance, helps you establish clear governance roles: who owns and updates the technical documentation, who signs off on major updates, and who maintains the ongoing risk log. Without clear accountability, compliance processes often fall through the cracks during product updates; we help you avoid that risk.
Compliance may look complex, but it's also an opportunity: businesses that act early can turn GPSR readiness into a trust and market advantage. We support companies in building these frameworks step by step, ensuring stronger relationships with authorities and customers.
At Nemko Digital, we typically help clients through our structured GPSR AI Safety Assessment and Evaluation. This process provides leadership teams with a clear, evidence-based understanding of their organisation's position and gaps under the General Product Safety Regulation (EU 2023/988).
With the advent of the GPSR, AI safety is becoming just as important as physical safety. Companies offering connected or AI-enabled devices in the EU need to understand that compliance is a continual requirement rather than a one-time event. Managed proactively, AI safety evaluations under the GPSR can be more than an exercise; they can be a competitive advantage that shows consumers your products are reliable, transparent, and safe in a connected world.
This is where Nemko Digital supports companies: translating abstract regulatory obligations into practical, technical, and governance steps. From audit-ready documentation templates to clear governance frameworks, we help businesses not only meet GPSR requirements but also turn compliance into a strategic advantage.