In this blueprint, we explain why a tiered approach makes sense in the EU AI Act and how to build a risk-based tiered regulatory regime for GPAI: the technicalities involved, which requirements should be imposed on each tier, and how those requirements should be enforced.
Earlier this year, we conducted research comparing different institutional models for an EU-level body to oversee the implementation and enforcement of the AI Act. We’re pleased to share our memo: Giving Agency to the AI Act.
TFS submitted a list of clauses to govern the development of general-purpose AI systems (GPAIS) to the U.S. NIST Generative AI Public Working Group (NIST GAI-PWG).
Our response put forward national priorities focused on security standards, measurement and evaluation frameworks, and an industry-wide code of conduct for GPAIS development.
In a paper published at the International Workshop on Artificial Intelligence and Intelligent Assistance for Legal Professionals in the Digital Workplace (Legal AIIA), Dr. Bruce Hedin and Samuel Curtis present an argument for distributed competence as a means to mitigate risks posed by AI systems.
Our response emphasized the need for scrutiny in the design and development of general-purpose AI systems (GPAIS). We encouraged the implementation of third-party assessments and audits, contestability tools for impacted persons, and a horizontal regulatory approach toward GPAIS.
The draft AI Act approved by the European Parliament contains a number of provisions for which TFS has been advocating, including a special governance regime tailored to general-purpose AI systems. Collectively, these operationalize safety, fairness, accountability, and transparency in the development and deployment of AI systems.
On April 20th, 2023, the Cabinet of Rwanda approved the country's National AI Policy, which TFS helped draft.