The EU AI Act’s foundation model regulation faces criticism motivated by economic concerns around the future of foundation model development in Europe.
A deeper look at the EU AI ecosystem and resulting market dynamics reveals that such concerns are likely misguided - a strong regulatory focus on foundation models would in fact be economically beneficial.
Introduction
Approaches to AI regulation are torn between an upstream focus on the providers of powerful foundation models and a downstream focus on the practical deployment of these models' capabilities.
Recent discussion around the EU’s AI Act has seen the proposal of an article 28b stipulating extensive regulatory requirements for providers of foundation models. This proposal was met with economic concerns regarding the EU’s global position in the AI space.
We argue that these economic objections are misguided, and make two core claims: (a) that the EU AI ecosystem will most likely not feature globally competitive foundation model providers, but will continue to consist mostly of downstream deployers of AI; and (b) that foundation-model-focused regulation places dramatically fewer regulatory burdens on EU AI players and enables a less constrained and more efficient market.
EU AI Regulation At a Crossroads
In April 2021, the EU Commission published its proposal for comprehensive regulation of artificial intelligence in Europe. This proposal, the AI Act, seeks to ensure safe and beneficial artificial intelligence in Europe by preventing harms from misuse and unreliability of AI systems while harnessing their economic and social potential. Following extensive negotiation, the EU trilogue is set to finalize the AI Act shortly.
Regulating Foundation Models
One of the few remaining controversies surrounding the AI Act concerns the regulation of so-called foundation models. Soon after the AI Act was originally suggested in 2021, public and political awareness of recent advances in AI research skyrocketed.
This specifically motivated a stronger focus on the cutting edge of AI capabilities, which is driven by foundation models - particularly powerful AI models with a wide range of applications. For instance, ChatGPT is based on OpenAI's foundation language models GPT-3.5 and GPT-4.
Strong expert consensus warned of risks of misuse, or worse, of unreliability and loss of control, from foundation models. In consequence, the European Parliament suggested the addition of an article 28b to the AI Act, introducing requirements and liabilities for foundation models.
The specific details of 28b have since been the meandering subject of negotiations - the common element relevant for our point is that 28b envisions that some of the burden of ensuring safe and legal AI outputs is borne by the providers of foundation models, and not only by the deployers ultimately bringing the AI to the customer.
For instance, models might have to undergo pre-publication screening for resistance to misuse or for reliability; model providers might be liable for harms caused by blatantly exploitable security gaps; or providers might be obligated to ensure that model outputs cannot violate privacy rights.
The Economic Objection
The parliament’s suggestion of article 28b has faced strong resistance on economic grounds, led by the national governments of France and Germany. Their predominant concern is that regulating foundation models might endanger nascent European foundation model development, such as by French MistralAI and German Aleph Alpha.
Anton Leicht is a doctoral student at Bayreuth University, supervised by Prof. Johanna Thoma. I hope to figure out how democratic, decision-theoretical and ethical principles can guide the rules we set for artificial intelligence systems.
In particular, I’m interested in finding out who should have a voice in how we govern AI today; how we can aggregate these voices into common goals; and how we can shape these goals into effective policy. Aside from artificial intelligence, I have previously worked in - and am still deeply curious about - economic and foreign policy.
Anton Leicht collaborated with Dominik Hermle on this text.