This workstream aims to establish a set of rules, obligations, and mechanisms that incentivize developers of general-purpose AI (GPAI) systems worldwide to apply their best effort to ensuring that cutting-edge models are developed and deployed in a manner that is safe and aligned with fundamental human rights. Through the EU AI Act, this governance regime would shift the practices and culture of developers, regulators, and deployers of GPAI toward trustworthiness. For these norms to be adopted by developers outside the EU (through the so-called “Brussels Effect”), the rules and obligations must be cost-effective: each additional dollar or hour spent on compliance must be justified by a meaningful increase in the trustworthiness of the AI systems concerned.

The Future Society has emerged as a key advocate for this nuanced regulatory oversight of GPAI under the AI Act. Our foundational memo, released in January 2022 and entitled “Fantastic Beasts and How to Tame Them,” detailed the considerable challenges GPAI presents to the strategic autonomy of the EU. It underscored the inherent risks to fundamental human rights and public welfare, advocating for preemptive measures. A novel solution presented in the memo was the establishment of the “Navigator Programme,” an initiative designed to open a channel of communication between the European Commission’s AI Office personnel and the GPAI providers developing the most powerful, and consequently the riskiest, models, thereby facilitating a more informed and safety-conscious development environment.

Fast-forward to September 2023: against the backdrop of the AI Act trilogue negotiations, The Future Society articulated a refined strategy in our publication “Heavy is the Head that Wears the Crown.” This blueprint delineates an optimal GPAI governance structure, synthesizing insights from more than two years of in-depth research on the topic. It champions a “tiered approach” to GPAI regulation within the EU AI Act, under which regulatory requirements are set in proportion to the risk posed by the models in each tier, thereby avoiding undue regulatory burden. The document serves not only as a theoretical guide but also as a practical roadmap: it crystallizes our research into pragmatic procedures, details the governance ecosystem, stipulates tier-specific requirements, and offers robust enforcement strategies. This holistic approach supports a coherent, flexible, and enforceable framework that promotes innovation while safeguarding societal imperatives.

In this context, in October 2023, The Future Society co-organised a roundtable on the Governance of General-Purpose AI with Stiftung Neue Verantwortung, hosted by Dragoş Tudorache (MEP). The event brought together 48 representatives of the European institutions, Member States, industry, academia, and civil society to discuss the risks and responsibilities of GPAI along the value chain, and how to balance these responsibilities with the competitiveness of the EU AI ecosystem.

Related resources

Heavy is the Head that Wears the Crown: A risk-based tiered approach to governing General-Purpose AI

In this blueprint, we explain why a tiered approach makes sense in the EU AI Act and how to build a risk-based tiered regulatory regime for GPAI: the technicalities involved, which requirements should apply to each tier, and how to enforce them.

Endorsements