Main Insight

We are releasing a blueprint for the proposed European AI Office, which puts forward design features that would enable the Office to implement and enforce the EU AI Act, with a focus on addressing transnational issues like general-purpose AI.

A Blueprint for the European AI Office

October 17, 2023

The latest discussions on the governance of foundation models and general-purpose AI often ask: "How can we enforce rules that affect cutting-edge foreign providers of such an opaque technology?"

Today, we're publishing a blueprint for the European AI Office – a centralised institution that could be established through the European Union (EU) AI Act – responsible for overseeing and supporting the implementation and enforcement of the regulation across the EU, notably on transnational issues like general-purpose AI.

This work follows our "Giving Agency to the EU AI Act" memo, which assessed different institutional models' suitability for enforcing the AI Act at the EU level. In particular, we compared a board, which would have relatively limited authority, with an agency, which would have much more enforcement power. In this blueprint, we presume that, as reflected in the European Parliament's June 2023 negotiating position, the AI Act would establish an AI Office (which, in terms of authority, sits somewhere between a board and an agency).

With this in mind, we sought to answer the question: What mechanisms would enable the European AI Office to function effectively, efficiently, coherently and legitimately? Through desk research and expert interviews, we analysed the design features of relevant institutions and of the proposed European AI Office. Based on this review of historical examples and expert opinion, we developed recommendations for the AI Office spanning legal, structural, financial, functional, and behavioural mechanisms.

Our recommendations are summarised in the figure below:

[Figure: summary of recommendations for the European AI Office]

Related resources

The Fifth Edition of The Athens Roundtable to take place in Washington, D.C.

The Fifth Edition of The Athens Roundtable on AI and the Rule of Law will convene in Washington D.C. on November 30th and December 1st, 2023, to examine the risks associated with foundation models and generative AI, and explore governance mechanisms that could serve to reduce these risks.

Model Protocol for Electronically Stored Information (ESI)

The Future Society, with support from IEEE, has developed a model protocol to assist parties seeking to establish the trustworthiness of advanced tools used to review electronically stored information (ESI) in legal discovery.

Heavy is the Head that Wears the Crown: A risk-based tiered approach to governing General-Purpose AI

In this blueprint, we explain why a tiered approach makes sense in the EU AI Act and how to build a risk-based tiered regulatory regime for GPAI – the technicalities involved, which requirements should be imposed on their corresponding tiers, and how to enforce them.

Giving Agency to the AI Act

Earlier this year, we conducted research comparing different institutional models for an EU-level body to oversee the implementation and enforcement of the AI Act. We're pleased to share our memo: Giving Agency to the AI Act.

Response to NIST Generative AI Public Working Group Request for Resources

TFS submitted a list of clauses to govern the development of general-purpose AI systems (GPAIS) to the U.S. NIST Generative AI Public Working Group (NIST GAI-PWG).

Response to U.S. OSTP Request for Information on National Priorities for AI

Our response put forward national priorities focused on security standards, measurement and evaluation frameworks, and an industry-wide code of conduct for GPAIS development.