TFS joins U.S. NIST AI Safety Institute Consortium

TFS is collaborating with NIST in the AI Safety Institute Consortium to help establish a new measurement science for identifying proven, scalable, and interoperable measurements and methodologies that promote the development of trustworthy AI and its responsible use.

Our 2023 Highlights

We recount our achievements and impact in 2023, spanning policy research, convenings, workshops, advocacy, community-building, report production, interviews, and participation in expert groups, conferences, and workshops.

A Blueprint for the European AI Office

We are releasing a blueprint for the proposed European AI Office, which puts forward design features that would enable the Office to implement and enforce the EU AI Act, with a focus on addressing transnational issues like general-purpose AI.

Strengthening the AI operating environment

In a paper published at the International Workshop on Artificial Intelligence and Intelligent Assistance for Legal Professionals in the Digital Workplace (Legal AIIA), Dr. Bruce Hedin and Samuel Curtis present an argument for distributed competence as a means to mitigate risks posed by AI systems.

Policy achievements in the EU AI Act

The draft AI Act approved by the European Parliament contains a number of provisions for which TFS has been advocating, including a special governance regime tailored to general-purpose AI systems. Collectively, these operationalize safety, fairness, accountability, and transparency in the development and deployment of AI systems.

Giving Agency to the AI Act

Earlier this year, we conducted research comparing different institutional models for an EU-level body to oversee the implementation and enforcement of the AI Act. We’re pleased to share our memo: Giving Agency to the AI Act.

National AI Strategies for Inclusive & Sustainable Development

From 2020 to 2022, TFS supported the development of three National AI Strategies in Africa with public sector partners, GIZ, and Smart Africa. These programs built capacity in AI policy, regulatory, and governance frameworks, supporting countries' efforts to harness AI responsibly and to achieve national objectives and inclusive and sustainable development goals.

Leveraging Responsible AI in the Banking Sector in Africa

Between May and August 2021, The Future Society collaborated with the bank Société Générale, its 16 branches in Africa, and the civic-tech company Bluenove to deepen Société Générale employees’ understanding of the risks and benefits of adopting AI in the banking sector. The three workshops organized over the summer culminated in a manifesto for the responsible use of AI and data.

Report Launch: Bridging AI’s trust gaps: Aligning policymakers and companies

Bridging AI’s trust gaps: Aligning policymakers and companies is a new global survey conducted by The Future Society in collaboration with EYQ (EY’s think tank). Common understanding between companies and policymakers is key to building a governance framework and protecting citizens’ rights. Our survey reveals ethical gaps between company and policy leaders that must be addressed for the trustworthy adoption of AI across sectors.

Briefing: A List of AI Governance Levers

Over the past few years, AI governance and policy actors, including companies, regulators, academics, and nonprofits, have put forward many initiatives to ensure that AI development and applications incorporate appropriate ethical and safety precautions, enabling ‘trustworthy’ or ‘responsible’ AI. This briefing lists these AI governance approaches rather than analyzing them, in order to lay out all possible options in a ‘toolbox’.

AI Spark Lab

Our AI Spark Lab designs and delivers large-scale consultations (involving thousands of collaborators and company stakeholders), targeted design-thinking workshops, and corporate innovation labs.