

Giving Agency to the AI Act

August 22, 2023

Earlier this year, TFS carried out a study to compare institutional models for a European Union-level body responsible for overseeing the implementation and enforcement of the AI Act. This research is presented in our memo: Giving Agency to the AI Act.

In the two years since the European Commission proposed the Act, one substantive element of negotiations has been the institutional model (such as a board, office, or agency) that the Act would establish to oversee and support its implementation and enforcement. The Commission’s initial proposal of a “European AI Board” in April 2021 was followed by numerous calls to “strengthen it” – that is, to institutionalise a more authoritative body, such as an office or agency, and to assign it additional tasks and responsibilities.

To provide clarity on this issue, TFS conducted research on how these institutional models differ and how well each would be suited to fulfilling the functions and responsibilities expected of the institution. In this memo, we compare two models, a board and an agency, which represent opposite ends of the spectrum in terms of authority. Grounding this in topical issues, we assess an additional dimension considered in AI Act negotiations: how these options compare with and without the introduction of a compliance function (one or more on-site staff who develop and enforce relevant internal policies) for regulated entities that pose transnational compliance issues.

Our memo presents the evaluation that led to our findings: that an AI Agency with dedicated staff and legal personhood would, on balance, be more suitable than an AI Board. Furthermore, we find that this holds even more strongly if larger regulated entities are required, per the AI Act, to establish an internal compliance function. In the memo, we provide preliminary recommendations for mechanisms that could enhance an AI Agency’s legitimacy at the member-state level, including hearings in national parliaments, transparency measures, and consultation with national – as opposed to only EU-level – stakeholders.

Related resources

Heavy is the Head that Wears the Crown

In this blueprint, we explain why a tiered approach makes sense in the EU AI Act and how to build a risk-based tiered regulatory regime for GPAI – the technicalities involved, which requirements should be imposed on each tier, and how to enforce them.

Response to NIST Generative AI Public Working Group Request for Resources

TFS submitted a list of clauses to govern the development of general-purpose AI systems (GPAIS) to the U.S. NIST Generative AI Public Working Group (NIST GAI-PWG).

Response to U.S. OSTP Request for Information on National Priorities for AI

Our response put forward national priorities focused on security standards, measurement and evaluation frameworks, and an industry-wide code of conduct for GPAIS development.

Strengthening the AI operating environment

In a paper published at the International Workshop on Artificial Intelligence and Intelligent Assistance for Legal Professionals in the Digital Workplace (Legal AIIA), Dr. Bruce Hedin and Samuel Curtis present an argument for distributed competence as a means to mitigate risks posed by AI systems.

Response to U.S. NTIA AI Accountability Policy Request for Comment

Our response emphasized the need for scrutiny in the design and development of general-purpose AI systems (GPAIS). We encouraged the implementation of third-party assessments and audits, contestability tools for impacted persons, and a horizontal regulatory approach toward GPAIS.

Policy achievements in the EU AI Act

The draft AI Act approved by the European Parliament contains a number of provisions for which TFS has been advocating, including a special governance regime tailored to general-purpose AI systems. Collectively, these operationalize safety, fairness, accountability, and transparency in the development and deployment of AI systems.