
Main Insight

In this blueprint, we explain why a tiered approach makes sense in the EU AI Act and how to build a risk-based tiered regulatory regime for GPAI – the technicalities involved, which requirements should be imposed on each tier, and how to enforce them.

Heavy is the Head that Wears the Crown: A risk-based tiered approach to governing General-Purpose AI

September 27, 2023

For the past two years, TFS has conducted extensive research on governing general-purpose AI (GPAI) and related foundation models. We have compiled these insights into a holistic, risk-based, tiered approach for GPAI, which we present in our latest blueprint, “Heavy is the Head that Wears the Crown”.

An executive summary is available here. The blueprint explains why a tiered approach makes sense in the EU AI Act and how to build a risk-based tiered regulatory regime for GPAI – the technicalities involved, which requirements should be imposed on each tier, and how to enforce them. A summary of our findings is below:

  1. We identify seven distinct challenges, arising largely from GPAI and generative AI, ranging from generalisation to concentration of power and misuse.
  2. Most definitions conflate generative AI, foundation models, and GPAI. We explain why separating GPAI models from the generative AI systems that build upon them is important for proportionality.
  3. We propose 3 tiers: generative AI systems, Type-I GPAI models and Type-II GPAI models (cutting-edge).
  4. In a tiered approach to GPAI regulation, requirements are set on models in proportion to their risk potential. Type-II GPAI models pose different and more severe challenges than Type-I models and generative AI systems; Type-I GPAI models in turn pose different and more severe challenges than generative AI systems.
  5. Therefore, Type-II models (currently ~10 providers) must comply with the full set of listed requirements, Type-I models (currently ~14 providers, incl. 6 from Type-II) with only a subset of them, and generative AI systems (>400 providers) with an even smaller subset – reflecting risk-based proportionality.
  6. The distinction between tiers is based on the generality of capabilities, which predicts quite well how risky a GPAI model is. It can be approximated by the compute used for training – a metric that is readily available internally and predictable, because it is a major cost driver – to be updated over time as better metrics become available (see the illustrative sketch after this list).
  7. Requirements for each tier are summarized in the Executive Summary.
  8. We present additional measures for effective enforcement, open source governance, combinations of GPAI models, and value chain governance.
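As a rough illustration of how a compute-based proxy for tier assignment could work in practice, the minimal Python sketch below maps a model onto the three proposed tiers. The FLOP thresholds, function names, and the example model are hypothetical placeholders chosen for illustration only; they are not figures from the blueprint or the EU AI Act. The 6 × parameters × tokens rule is a widely used approximation of training compute, which is part of why the metric is readily available and predictable to providers.

```python
# Illustrative sketch only: thresholds and names are hypothetical,
# not taken from the blueprint or the EU AI Act.

def estimate_training_compute(n_params: float, n_tokens: float) -> float:
    """Approximate training compute in FLOP using the common ~6 * N * D rule of thumb."""
    return 6.0 * n_params * n_tokens

def classify_tier(training_flop: float,
                  type_ii_threshold: float = 1e25,   # hypothetical cutoff
                  type_i_threshold: float = 1e23) -> str:  # hypothetical cutoff
    """Map a model's estimated training compute onto the three proposed tiers."""
    if training_flop >= type_ii_threshold:
        return "Type-II GPAI model (cutting-edge)"
    if training_flop >= type_i_threshold:
        return "Type-I GPAI model"
    return "Generative AI system"

# Example: a 70B-parameter model trained on 2 trillion tokens (~8.4e23 FLOP).
flop = estimate_training_compute(70e9, 2e12)
print(classify_tier(flop))  # -> "Type-I GPAI model" under these illustrative thresholds
```

The point of the sketch is not the specific numbers but the mechanism: because training compute is knowable before and during a training run, a provider can determine its tier – and the corresponding obligations – ahead of deployment, and thresholds can be revised as better capability metrics emerge.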

Related resources

Springtime Updates

A leadership transition, an Athens Roundtable report, and a few more big announcements!

Leadership transition announcement

Earlier this year, after an extensive search for The Future Society’s next Executive Director, the Board of Directors selected Nicolas Moës, former Director of EU AI Governance, to fill the role.

TFS feedback reflected in historic UN General Assembly resolution on AI

Earlier this year, we provided feedback on a draft UN General Assembly resolution on AI. Last week, an updated resolution that reflected our feedback, co-sponsored by more than 120 Member States, was adopted by consensus.

TFS joins U.S. NIST AI Safety Institute Consortium

TFS is collaborating with NIST through the AI Safety Institute Consortium to establish a new measurement science that enables the identification of proven, scalable, and interoperable measurements and methodologies to promote the development of trustworthy AI and its responsible use.

Towards Effective Governance of Foundation Models and Generative AI

We share highlights from the fifth edition of The Athens Roundtable on AI and the Rule of Law, and we present eight key recommendations that emerged from the discussions.

Launch event for the Journal of AI Law and Regulation (AIRe)

TFS is partnering with Lexxion Publisher to launch The Journal of AI Law and Regulation (AIRe), covering key legislative developments in AI governance and providing an impartial platform for discussing the role of law and regulation in managing the challenges and opportunities of AI's impact on society.