Meta Declines to Endorse EU’s AI Regulatory Framework

Meta has declined to endorse the European Union’s recently introduced regulatory framework for artificial intelligence, just weeks before new rules for general-purpose AI providers are set to take effect. The decision has sharpened debate about the future of AI governance in Europe.

Joel Kaplan, Meta’s chief global affairs officer, expressed his concerns in a recent LinkedIn post, stating, “Europe is heading down the wrong path on AI.” He elaborated that after a thorough review of the European Commission’s Code of Practice for general-purpose AI models, Meta has decided against signing it. Kaplan highlighted that the Code introduces various legal ambiguities for developers and imposes requirements that exceed the intended scope of the AI Act.

The EU’s code of practice, which was published earlier this month, is designed as a voluntary framework to assist companies in aligning their operations with the EU’s AI regulations. Key provisions of the code include mandates for companies to maintain and regularly update documentation regarding their AI tools and services. Additionally, it prohibits developers from training AI systems on pirated content and requires compliance with content owners’ requests regarding the use of their works in datasets.

Kaplan criticized the EU’s approach, labeling it as an “overreach” that could hinder the growth and implementation of advanced AI models within Europe. He warned that such regulations could stifle innovation and deter European companies from leveraging AI technologies to build their businesses.

The AI Act, which is based on a risk-based regulatory framework, outright bans certain high-risk applications of AI, including manipulative behavioral techniques and social scoring. It also categorizes various uses of AI as “high-risk,” particularly in sensitive areas such as biometrics, facial recognition, education, and employment. Developers are required to register their AI systems and adhere to specific risk and quality management standards.

Tech companies in Europe and beyond, including several of the industry’s largest players, have pushed back against the regulations and urged the European Commission to delay their implementation. The Commission, however, has held firm to its timeline for rolling out the rules.

Separately, the EU released guidelines for AI model providers ahead of the rules taking effect on August 2. The guidelines apply to companies offering general-purpose AI models deemed to pose systemic risk; providers whose models were on the market before that date will have until August 2, 2027 to bring them into full compliance.
