Nemko Digital · Feb 20, 2026 · 3 min read

EU Launches Investigation into X and Grok AI Under the Digital Services Act


In a significant move to enforce its landmark digital regulations, the European Commission has launched a formal investigation into X (formerly Twitter) to assess its compliance with the Digital Services Act. The probe, announced on January 26, 2026, centers on the risks associated with the platform’s generative AI tool, Grok, and its recommender systems. This development underscores the EU’s commitment to creating a safer digital environment and holding very large online platforms (VLOPs) accountable for managing systemic risks.

This investigation expands on formal proceedings initiated in December 2023, which already scrutinized X’s content moderation processes and transparency measures. The new inquiry specifically targets whether X has adequately assessed and mitigated the potential harms linked to Grok, such as the dissemination of illegal content and manipulated media. For organizations operating in the digital sphere, from major social media platforms (including Facebook, YouTube, and TikTok) to online marketplaces and hosting providers, this action serves as a critical reminder of the evolving landscape of AI law for businesses and of the growing importance of robust AI governance and clear rules.


Understanding the Digital Services Act Probe


The Commission's investigation is rooted in concerns that X’s deployment of Grok may not align with its obligations under the Digital Services Act. The probe will focus on several key areas to determine whether the platform has failed to uphold its responsibilities in ensuring a transparent and secure online ecosystem. These obligations are not merely procedural; they are fundamental to protecting users from systemic risks, including gender-based violence and threats to mental well-being, as well as to protecting minors and improving user communications around safety controls.

The core of the investigation revolves around the following obligations:

| Investigation Focus Area | Key DSA Obligation | Potential Infringement |
| --- | --- | --- |
| AI Risk Mitigation | Diligently assess and mitigate systemic risks from AI functionalities. | Failure to prevent the spread of illegal content and harmful material generated by Grok. |
| Risk Assessment Reporting | Conduct and submit an ad hoc risk assessment report for new functionalities. | Lack of a comprehensive report on Grok's impact on the platform's risk profile before deployment. |
| Recommender Systems | Mitigate risks associated with algorithmic recommender systems. | Insufficient measures to address risks from the switch to a Grok-based recommender system. |


These proceedings highlight the EU’s proactive stance on regulating powerful tech platforms. The Digital Services Act empowers the Commission to take significant enforcement actions, including imposing substantial fines for non-compliance and requiring detailed transparency reports. This was previously demonstrated by the €120 million fine imposed in December 2025 against X for transparency breaches.


What This Means for AI Risk Assessment and Governance

The EU’s focus on X and Grok sends a clear message to all organizations deploying AI: a proactive and comprehensive AI risk assessment is non-negotiable. The investigation emphasizes that compliance extends beyond content moderation to include the very design and deployment of AI systems. Companies must be prepared to demonstrate that they have thoroughly evaluated potential harms and implemented effective mitigation strategies, a process that requires deep technical and regulatory expertise.

For businesses navigating this complex environment, the principles of the Digital Services Act are closely intertwined with other regulatory frameworks. Successfully navigating the EU AI Act, for example, also requires a structured approach to risk management and transparency. The Commission's actions signal a future in which regulators will demand detailed evidence of due diligence, including completed fundamental rights impact assessments for high-risk AI systems.

As the digital landscape evolves, establishing a robust governance framework is essential for sustainable innovation. Organizations that embed ethical principles and rigorous compliance checks into their AI lifecycle will not only mitigate legal and financial risks but also build lasting digital trust with their users. Partnering with experts in AI regulatory compliance can provide the necessary guidance to turn these complex regulatory obligations into a competitive advantage, ensuring that technology serves both business objectives and societal values.

Nemko Digital
Nemko Digital is formed by a team of experts dedicated to guiding businesses through the complexities of AI governance, risk, and compliance. With extensive experience in capacity building, strategic advisory, and comprehensive assessments, we help our clients navigate regulations and build trust in their AI solutions. Backed by Nemko Group’s 90+ years of technological expertise, our team is committed to providing you with the latest insights to nurture your knowledge and ensure your success.
