Nemko Digital · January 8, 2025 · 15 min read

A Pivotal Year for AI Governance and the Road Ahead

2024 has been a landmark year in AI governance, with the EU introducing the AI Act, the General Product Safety Regulation (GPSR), and the updated Product Liability Directive. The UK built on the inaugural AI Safety Summit it hosted at Bletchley Park, setting the stage for international collaboration, while the U.S. signaled potential shifts in its AI policy following the presidential election. Globally, multilateral forums like the G7 and G20 worked to align standards, and the EU unveiled plans for AI Factories to enhance its technological capacity.

In this article, we review the most significant regulatory changes and notable developments in AI governance throughout 2024 and look ahead to 2025. The coming year is set to bring further changes, with critical dates and obligations already scheduled to take effect in the EU within the first quarter. To help companies stay prepared, we’ve outlined the key deadlines and developments that demand attention in the months ahead.

Looking Back at AI Governance in 2024

The year 2024 marked a turning point in Europe’s evolving framework for AI governance and digital product regulation. It brought into effect multiple legislative instruments, each aiming to enhance safety, accountability, and consumer protection within the rapidly expanding digital ecosystem. From the landmark AI Act to new product safety and liability regimes, and from ongoing considerations around AI liability to strengthened cybersecurity measures, 2024 laid the groundwork for a more harmonized and robust AI governance landscape. Below, we examine the key developments and their implications.

While Europe forged ahead with new regulatory frameworks for AI in 2024, the U.S. adopted a more industry-led approach, emphasizing voluntary guidelines and ethical frameworks. This divergence highlights the growing global debate over how to balance innovation against the risks of AI, setting the stage for future collaboration or conflict in the international digital market.

The AI Act: Setting the Stage for a New Era of AI Regulation

On 12 July 2024, the EU’s AI Act was published in the Official Journal of the European Union, signifying a major milestone in the region’s efforts to comprehensively regulate AI systems. It formally entered into force on 1 August 2024, beginning a phased rollout of its requirements. By 2 November 2024, Member States were mandated to publicly designate the authorities and bodies responsible for fundamental rights protection and to notify the European Commission and other Member States. Although much of the Act’s direct impact will materialize over time, 2024’s actions established the foundational governance architecture for the coming years.

For companies using or embedding AI in their products, the main focus throughout 2024 was proactive preparedness. As it became clear that the AI Act’s phased requirements would soon demand a higher level of transparency, compliance, and oversight, businesses concentrated on strengthening their internal governance structures, risk management protocols, and data protection measures. Whether through dedicating more resources to responsible AI research, training staff on compliance readiness, or conducting robust impact assessments early, the emphasis was on future-proofing strategies to ensure that new regulations, once fully operational, would be integrated smoothly and effectively into product development lifecycles.

The General Product Safety Regulation (GPSR): A Modernized Consumer Protection Framework

The EU’s General Product Safety Regulation (GPSR) came into effect on 13 December 2024, replacing the decades-old General Product Safety Directive. Unlike its predecessor, the GPSR is directly applicable across all EU/EEA member states, ensuring consistency in product safety standards. It introduced significant new obligations for economic operators, from risk assessments and technical documentation to enhanced traceability, prompt consumer notifications, and robust recall processes. Online marketplaces must now provide clear product information, maintain a single point of contact for authorities and consumers, and register with the Safety Gate Portal. For all distance sellers, comprehensive product information must be visibly presented. Critically, the GPSR accounts for digital elements, requiring that manufacturers monitor software updates and cybersecurity risks—further aligning traditional safety considerations with the digital realities of connected and AI-enabled products.

For companies integrating AI into their offerings, the focus in 2024 was on early compliance planning and proactive risk management. Businesses recognized that product safety would no longer be limited to physical attributes. Instead, software quality, cybersecurity resilience, and post-market monitoring of connected devices became essential components of compliance strategies. This shift led firms to invest in internal processes that continually assess digital vulnerabilities, enhance transparency in product labeling, and anticipate future compliance hurdles. Companies that began building capabilities for continuous digital safety assessments and forging closer relationships with suppliers, regulators, and third-party verification bodies are now better positioned to meet emerging requirements without disrupting their product innovation cycles.

The Product Liability Directive (PLD): Modernizing Liability in the Digital Age

Another cornerstone change in 2024 was the adoption of the new Product Liability Directive 2024/2853 on 23 October 2024. Set to replace its nearly 40-year-old predecessor, this directive adapts the EU’s strict liability regime to modern product scenarios, including software and AI-driven systems. It clarifies that product liability does not require proving negligence—only that a defective product caused damage. Significantly, the new PLD expands the concept of “product” to include software and AI systems, removes deductibles and liability ceilings, and extends potential liability to additional actors such as fulfillment service providers and certain distributors. It also eases the burden of proof for claimants, allowing courts to presume defectiveness where technical or scientific complexity would make proof excessively difficult, and empowers courts to compel disclosure of critical evidence. Extended limitation periods for latent damage (such as extending the “long-stop” from 10 to 25 years) and closer alignment with regulatory safety interventions mean more robust protections for consumers navigating the digital marketplace.

The AI Liability Directive (AILD): A Work in Progress

While the AI Act and the new PLD are now on the books, 2024 also saw ongoing discussions around the proposed AI Liability Directive (AILD). Initially proposed by the European Commission in 2022, the AILD aims to supplement the existing frameworks by adapting non-contractual civil liability rules to AI-related harm. A 2024 impact assessment by the European Parliamentary Research Service called for further refinements, including widening the scope beyond high-risk AI to “high-impact” systems and even rethinking the directive as a “software liability” instrument. It recommended possibly introducing strict liability for certain categories of AI systems, bringing general-purpose AI under the AILD’s ambit, and transforming the directive into a regulation to ensure EU-wide uniformity. Although the fate of the AILD remains uncertain, it is clear that EU policymakers recognize the need for further, perhaps more sweeping, rules to ensure responsible AI deployment and accessible redress mechanisms for harmed parties.

Cybersecurity Measures: Reinforcing Digital Defenses

Cybersecurity also received heightened attention in 2024. The adoption of the Cyber Solidarity Act on 2 December 2024, along with the Cyber Resilience Act’s approval on 10 October 2024, underscored the EU’s commitment to strengthening detection, preparedness, and responses to cyber threats. The measures require digital products, including those embedded with AI, to meet clear cybersecurity standards before entering the market. Additionally, the EU continued to take a firm stance against malicious cyber activities, imposing sanctions on individuals and entities linked to critical infrastructure attacks. This blend of proactive legislation and reactive enforcement signals that cybersecurity is integral to the broader AI governance agenda.

For businesses incorporating AI into their offerings, the primary focus in 2024 was on developing and maintaining robust cybersecurity compliance strategies. Companies anticipated a regulatory environment that increasingly treats cybersecurity as integral to lawful market access, placing particular emphasis on risk assessments, secure software lifecycle management, and the integration of incident response protocols. By investing early in these capabilities and engaging with cybersecurity guidance—whether through in-house compliance teams, external counsel, or industry consortia—firms positioned themselves to better navigate the emerging regulatory standards and to bolster trust in their AI-driven products and services.

United Kingdom

In 2024, the United Kingdom sought to position itself at the forefront of global AI governance discussions, building on the inaugural AI Safety Summit it hosted at Bletchley Park in November 2023. That high-profile gathering convened government officials, industry leaders, and academic experts to forge consensus on emerging risks and best practices for AI use. Although the resulting principles were non-binding, they provided valuable guidance for companies seeking to understand the UK’s approach: a system grounded in international cooperation, risk-based regulation, and ethical oversight. Coupled with ongoing government work on pro-innovation frameworks and engagement with existing regulators, the UK’s approach encourages businesses to adapt their compliance and governance strategies in anticipation of evolving standards.

United States

In the United States, 2024 saw the federal government translate its vision of responsible AI into more concrete directives. Building on previously stated objectives, the administration’s executive order issued in late 2023 set the stage for 2024’s wave of federal agency guidance—particularly around transparent and safe AI development. The National Institute of Standards and Technology continued to refine its AI Risk Management Framework, offering companies actionable criteria for risk assessment and mitigation. Meanwhile, regulatory bodies such as the Federal Trade Commission signaled an increased willingness to enforce existing laws against unfair or deceptive AI practices. For U.S. businesses, aligning product pipelines and compliance programs with these evolving expectations became essential, as demonstrating adherence to best practices and readiness for regulatory scrutiny moved from aspiration to operational necessity.

Global Efforts

Beyond national borders, 2024 underscored a growing alignment among international bodies and multilateral forums on the core principles of AI governance. The G7 and G20 intensified their discussions, emphasizing transparency, fairness, and accountability, while the OECD updated its own guidelines to address concerns about data privacy, bias, and safety in generative AI systems. Standard-setting organizations such as the International Organization for Standardization advanced technical benchmarks that companies can use to verify their internal controls and product integrity. Although many of these efforts remain voluntary or aspirational, their collective weight shaped a more predictable global environment. For businesses operating across jurisdictions, proactively engaging with these evolving norms—by integrating internationally recognized standards and participating in cross-border dialogues—became a strategic imperative to ensure long-term regulatory resilience and market trust.

Looking Ahead to 2025: What to Prepare for in AI Governance

The EU

February 2, 2025: The EU AI Act's prohibitions on "unacceptable risk" AI systems become enforceable. This includes bans on AI applications such as social scoring and certain types of biometric categorization. Additionally, obligations related to AI literacy and public awareness initiatives commence, emphasizing the importance of understanding AI's benefits, risks, and associated rights and obligations.

August 2, 2025: Governance rules and obligations for providers of general-purpose AI (GPAI) models come into effect. By this date, the European AI Office is expected to have published Codes of Practice relating to GPAI models. If these codes are not finalized or deemed inadequate, the European Commission may establish common rules for GPAI providers via implementing acts. Furthermore, Member States are required to have defined and communicated their penalty frameworks for non-compliance with the AI Act to the European Commission, ensuring that enforcement mechanisms are in place.

Thought Leadership in AI in Europe

In December 2024, EuroHPC JU selected seven consortia to establish the first AI Factories in Europe. These facilities will be located in Finland, Germany, Greece, Italy, Luxembourg, Spain, and Sweden. The project represents a collaborative effort involving 15 EU member states and two EuroHPC participating states, with a total investment of approximately €1.5 billion, combining national and EU funding. The deployment of these AI-optimized supercomputers is scheduled for 2025-2026, with the goal of more than doubling Europe's current computing capacity.

Each AI Factory will focus on specific sectors, including healthcare, energy, climate, and finance, fostering innovation and collaboration across Europe. For instance, the AI Factories in Spain and Finland will include experimental platforms to develop and test innovative AI models and applications. These facilities are expected to democratize access to advanced AI resources, supporting startups, small and medium-sized enterprises (SMEs), and the broader research community.

In France, President Emmanuel Macron has expressed a strong commitment to making the country a leader in AI. Plans are underway to position Paris as a hub for AI development, with an international summit scheduled for 2025. The French government aims to increase the number of individuals trained in AI, establish data centers, and boost semiconductor production, highlighting the strategic importance of AI in national economic growth and transformation.

United Kingdom

The UK government has announced plans to introduce legislation aimed at mitigating AI-related risks. Technology Secretary Peter Kyle confirmed that forthcoming laws would transform current voluntary AI testing agreements into legally binding codes. This legislative framework will also establish the UK's AI Safety Institute as an independent government body dedicated to safeguarding citizens against AI-induced threats. The primary focus will be on advanced AI models, such as generative AI systems, to ensure their safe deployment.

Regulatory Oversight: The Competition and Markets Authority (CMA) is set to receive enhanced powers to oversee AI firms. Under the leadership of Sarah Cardell, the CMA has been proactive in addressing the rapid advancements in AI technology. Anticipating shifts in the AI market, the CMA established a unit in 2019 staffed with data scientists and technologists. By 2023, they were prepared to tackle the evolving AI landscape, launching a review of foundation models. This review highlighted over 90 partnerships among major tech firms and identified risks to fair competition. The CMA also outlined six principles addressing issues from input access to AI accountability. The AI sector has responded positively to their initiatives. Additionally, the CMA has been actively using merger control powers to investigate various tech deals, such as Microsoft's partnership with OpenAI and Amazon's collaboration with Anthropic. New legislation expected in 2025 will further empower the CMA to set codes of conduct for AI firms, with penalties for non-compliance. Cardell emphasizes the importance of early intervention to promote competition and protect consumers, foreseeing the potential for rapid changes in the AI market.

Creative Industry Protections: The UK government is considering introducing a "right to personality" to protect artists and celebrities from AI companies creating products that mimic their unique features. A consultation will be launched to update copyright rules, focusing on how tech companies use content to train AI models. Proposed measures include a rights reservation mechanism to offer legal clarity and ensure that artists' content can be licensed or protected against unauthorized use. The consultation also aims to enhance transparency around the use of scraped materials by AI companies. The move has generated concern within the creative industries, which fear that AI companies might exploit their work without proper consent. The government seeks to balance interests between fostering AI innovation and protecting the UK's £125bn creative sector. The outcome of this consultation will be crucial in setting future legislation, with debates expected to be intense and multifaceted.

United States

In 2025, the United States is expected to experience significant shifts in AI governance, influenced by the recent election of Donald Trump. The incoming administration has indicated plans to reassess existing AI policies, introducing a degree of uncertainty for businesses and policymakers.

Policy Reversals and New Initiatives: President-elect Trump has expressed intentions to rescind Executive Order 14110, signed by President Joe Biden in October 2023, which established comprehensive guidelines for the safe and trustworthy development of AI. The potential revocation of this order could lead to a regulatory vacuum, leaving companies uncertain about compliance requirements and ethical standards. Additionally, the Trump administration has signaled a focus on AI development to counter international competitors, particularly China, advocating for initiatives akin to the "Manhattan Project" to achieve dominance in artificial general intelligence (AGI).

Regulatory Uncertainty: The anticipated policy reversals may create an environment of regulatory ambiguity. Businesses could face challenges in aligning their AI strategies with evolving federal guidelines, especially if existing frameworks are dismantled without immediate replacements. This uncertainty may affect investment decisions, innovation trajectories, and compliance efforts, as companies strive to navigate the shifting landscape.

International Considerations: The U.S. approach to AI governance in 2025 will also be shaped by geopolitical dynamics, particularly the technological rivalry with China. The Trump administration's emphasis on outpacing China in AI capabilities suggests potential increases in government funding for AI research and development. However, this competitive stance may also lead to heightened scrutiny of international collaborations and stricter export controls, impacting global AI partnerships and supply chains.

Given these anticipated developments, companies operating in the AI sector should closely monitor policy announcements from the new administration. Engaging with industry associations and legal experts will be crucial to adapt to the evolving regulatory environment and to ensure that AI initiatives remain compliant and strategically aligned with national priorities.

Conclusion and Key Takeaways

The year 2024 brought significant advancements in AI governance, with far-reaching implications for businesses worldwide. The EU led the charge with landmark legislation such as the AI Act, the General Product Safety Regulation, and the updated Product Liability Directive, creating a comprehensive framework for AI compliance. In the UK, the Global AI Safety Summit highlighted the nation’s leadership in fostering international collaboration on AI safety, while laying the groundwork for stronger regulatory oversight in the years ahead. Meanwhile, the United States entered a period of policy uncertainty, with potential shifts in governance under the new administration adding complexity to the global regulatory landscape.

Key Takeaways

1. EU Compliance Deadlines: Businesses targeting the EU market should prepare for upcoming AI Act requirements in 2025, including prohibitions on unacceptable-risk AI systems and governance rules for general-purpose AI. Early planning for these obligations is essential to avoid significant penalties.

2. Proactive Risk Management: Companies integrating AI should prioritize cybersecurity and software resilience to align with the GPSR and related liability directives. Robust risk assessments, traceability, and post-market monitoring are now critical components of compliance.

3. Global Harmonization Efforts: International initiatives, including the G7 and G20’s work on AI standards, emphasize the importance of aligning corporate policies with globally recognized principles for transparency, fairness, and accountability.

4. Regulatory Uncertainty in the US: U.S. businesses should remain agile, monitoring potential policy reversals and shifts in AI governance while preparing for heightened scrutiny in areas like generative AI and data security.


Nemko Digital

Nemko Digital is a team of experts dedicated to guiding businesses through the complexities of AI governance, risk, and compliance. With extensive experience in capacity building, strategic advisory, and comprehensive assessments, we help our clients navigate regulations and build trust in their AI solutions. Backed by Nemko Group’s 90+ years of technological expertise, our team is committed to providing you with the latest insights to nurture your knowledge and ensure your success.
