Main Insight

In a report for the OECD AI Policy Observatory, The Future Society mapped opportunities and risks along AI value chains to facilitate adoption of the OECD AI Principles.

Mapping the AI Value Chain (OECD.AI)

May 8, 2020

The Future Society is leading research with the OECD AI policy team to map global AI value chains. Metrics and measurements are currently lacking, yet a deeper knowledge of the actors involved, of how and where value is created, and of the risks within the wider digital economy is crucial for informing policymakers. An informed understanding of the value chains underlying industrial AI paves the way toward smart policymaking.

This February, The Future Society participated in the launch of the OECD AI Policy Observatory (OECD.AI) as a member of its Network of Experts on AI. Previously, The Future Society played an instrumental role in developing the OECD AI Principles, now adopted by the G20. Together with our work on the IEEE Global Initiative on Ethically Aligned Design, we have now contributed to the establishment of two complementary global regimes of principles for AI.

Related resources

The Fifth Edition of The Athens Roundtable to take place in Washington, D.C.

The Fifth Edition of The Athens Roundtable on AI and the Rule of Law will convene in Washington, D.C. on November 30th and December 1st, 2023, to examine the risks associated with foundation models and generative AI, and explore governance mechanisms that could serve to reduce these risks.

A Blueprint for the European AI Office

We are releasing a blueprint for the proposed European AI Office, which puts forward design features that would enable the Office to implement and enforce the EU AI Act, with a focus on addressing transnational issues like general-purpose AI.

Model Protocol for Electronically Stored Information (ESI)

The Future Society, with support from IEEE, has developed a model protocol to assist parties seeking to establish the trustworthiness of advanced tools used to review electronically stored information (ESI) in legal discovery.

Heavy is the Head that Wears the Crown: A risk-based tiered approach to governing General-Purpose AI

In this blueprint, we explain why a tiered approach makes sense in the EU AI Act and how to build a risk-based tiered regulatory regime for GPAI: the technicalities involved, which requirements should apply to each tier, and how they should be enforced.

Giving Agency to the AI Act

Earlier this year, we conducted research comparing different institutional models for an EU-level body to oversee the implementation and enforcement of the AI Act. We're pleased to share our memo: Giving Agency to the AI Act.

Response to NIST Generative AI Public Working Group Request for Resources

The Future Society submitted a list of clauses to govern the development of general-purpose AI systems (GPAIS) to the U.S. NIST Generative AI Public Working Group (NIST GAI-PWG).