
Regulatory Scrutiny Heats Up in Europe
The Irish Data Protection Commission has launched a formal investigation into X Internet Unlimited Company (XIUC), the platform's newly renamed Irish entity, over its alleged use of public posts from European Union users to train its Grok AI chatbot.
This probe marks another step in the EU's push to hold AI vendors accountable for their practices, particularly with regard to user consent. Many leading AI companies have adopted a "build first, ask later" strategy, deploying models before addressing regulatory compliance.
The investigation centers on X's practice of sharing publicly available user data, such as posts, profiles, and interactions, with its affiliate xAI, which uses the content to train the Grok chatbot. The practice has drawn concern from regulators and privacy advocates because users were never asked for explicit consent.
Setting a Precedent for AI Regulation
The probe into X's use of personal data could set a precedent for how companies may use publicly available data under the bloc's privacy laws. It comes as rival Meta has announced plans to train its AI models in the EU on public posts, comments, and user interactions with its AI tools.
According to Hyoun Park, CEO and chief analyst at Amalgam Insights, "the EU does not look kindly to the approach of opting users into sharing data by default." The GDPR has been in force since 2018, with annual fines consistently exceeding €1 billion, setting a high bar for companies operating in the EU.
Impact on Enterprise Adoption
The probe is likely to dampen enterprise adoption of AI models trained on publicly available personal data, as businesses weigh legal and reputational risks. According to Greyhound Research, 82% of technology leaders in the EU now scrutinize AI model lineage before approving deployment.
One case cited by Sanchit Vir Gogia, chief analyst and CEO at Greyhound Research, involved a Nordic bank that paused a generative AI pilot over concerns about the provenance of the model's training data. Compliance concerns took precedence, and the program was restructured around a Europe-based model with fully disclosed training inputs.
Global Implications
The investigation into X could shape how regulators worldwide rethink consent in the age of AI. According to Gogia, the probe might do for AI what Schrems II did for data transfers: set the tone for global scrutiny.
Park suggested that enterprise customers should seek indemnity clauses from AI vendors to protect against data compliance risks. These clauses hold vendors accountable for regulatory compliance, governance, and intellectual property issues linked to the AI models they provide.
A New Era in AI Regulation
The narrative is shifting from enforcement to example-setting, with regulators in Germany, the Netherlands, Singapore, and Canada likely to follow suit. This development signals a new era of AI regulation in which companies will need to prioritize consent and data compliance.