Heavy is the Head that Wears the Crown: A risk-based tiered approach to governing General-Purpose AI

September 27, 2023

For the past two years, TFS has conducted extensive research on governing general-purpose AI (GPAI) and related foundation models. We have compiled these insights into a holistic, risk-based, tiered approach to GPAI, presented in our latest blueprint: “Heavy is the Head that Wears the Crown”.

An executive summary is available here. The blueprint explains why a tiered approach makes sense in the EU AI Act and how to build a risk-based tiered regulatory regime for GPAI: the technicalities involved, which requirements should be imposed on each tier, and how those requirements should be enforced. A summary of our findings is below:

  1. We identify seven distinct challenges that arise primarily from GPAI and generative AI, ranging from generalisation to concentration of power and misuse.
  2. Most definitions conflate generative AI, foundation models, and GPAI. We explain why separating GPAI models from the generative AI systems built on top of them is important for proportionality.
  3. We propose three tiers: generative AI systems, Type-I GPAI models, and Type-II GPAI models (cutting-edge).
  4. In a tiered approach to GPAI regulation, requirements are imposed on models in proportion to their risk potential. Type-II GPAI models pose different and more severe challenges than Type-I models and generative AI systems; Type-I GPAI models, in turn, pose different and more severe challenges than generative AI systems.
  5. Therefore, Type-II models (currently ~10 providers) must comply with the full set of listed requirements, Type-I models (currently ~14 providers, incl. 6 also covered by Type-II) face only a subset of these requirements, and generative AI systems (>400 providers) face an even smaller subset – reflecting risk-based proportionality.
  6. The distinction between tiers is based on the generality of a model's capabilities, which is a good predictor of its risk. Generality can be approximated by the compute used for training (a metric that is readily available internally and predictable, because compute is a major cost driver), and this proxy can be updated over time as better metrics become available.
  7. Requirements for each tier are summarized in the Executive Summary.
  8. We present additional measures for effective enforcement, open source governance, combinations of GPAI models, and value chain governance.
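The compute-based proxy described in point 6 can be operationalised as a simple threshold rule: a model's training compute determines its tier, and the cutoffs can be revised as better metrics emerge. The sketch below is purely illustrative; the FLOP thresholds are placeholder assumptions of ours, not figures taken from the blueprint.

```python
def assign_tier(training_flop: float) -> str:
    """Assign a regulatory tier based on training compute (FLOP).

    The thresholds below are hypothetical placeholders for
    illustration; any real regime would set and periodically
    update these cutoffs.
    """
    TYPE_II_THRESHOLD = 1e25  # assumed cutoff for cutting-edge (Type-II) models
    TYPE_I_THRESHOLD = 1e23   # assumed cutoff for Type-I GPAI models

    if training_flop >= TYPE_II_THRESHOLD:
        return "Type-II GPAI model"
    if training_flop >= TYPE_I_THRESHOLD:
        return "Type-I GPAI model"
    return "generative AI system"
```

Because the thresholds are isolated constants, updating the regime over time (as point 6 envisages) amounts to revising two numbers rather than redefining the tiers themselves.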

Related resources

The Fifth Edition of The Athens Roundtable to take place in Washington, D.C.

The Fifth Edition of The Athens Roundtable on AI and the Rule of Law will convene in Washington D.C. on November 30th and December 1st, 2023, to examine the risks associated with foundation models and generative AI, and explore governance mechanisms that could serve to reduce these risks.

A Blueprint for the European AI Office

We are releasing a blueprint for the proposed European AI Office, which puts forward design features that would enable the Office to implement and enforce the EU AI Act, with a focus on addressing transnational issues like general-purpose AI.

Model Protocol for Electronically Stored Information (ESI)

The Future Society, with support from IEEE, has developed a model protocol to assist parties seeking to establish the trustworthiness of advanced tools used to review electronically stored information (ESI) in legal discovery.

Giving Agency to the AI Act

Earlier this year, we conducted research comparing different institutional models for an EU-level body to oversee the implementation and enforcement of the AI Act. We're pleased to share our memo: Giving Agency to the AI Act.

Response to NIST Generative AI Public Working Group Request for Resources

TFS submitted a list of clauses to govern the development of general-purpose AI systems (GPAIS) to the U.S. NIST Generative AI Public Working Group (NIST GAI-PWG).

Response to U.S. OSTP Request for Information on National Priorities for AI

Our response put forward national priorities focused on security standards, measurement and evaluation frameworks, and an industry-wide code of conduct for GPAIS development.