AI systems present a governance challenge that is unprecedented in scale and complexity. Central to this challenge are the emergent properties of advanced AI systems: state-of-the-art, large-scale AI systems are beginning to demonstrate behaviors that even their developers did not design or predict. As systems grow more capable, it becomes increasingly difficult to forecast how they might be misused or how they will interact with society.

Severe and systemic power imbalances magnify the challenge of governance. State-of-the-art AI systems confer substantial commercial advantages, but their development requires enormous sums of capital, attracting primarily private rather than public-sector or academic investment. As a result, leading AI and technology companies are racing ahead in research, using their resources to influence policymaking, and hastily deploying untested technology in pursuit of market advantage. Unless stronger oversight and more robust laws are implemented, we face the possibility that unfettered development of AI capabilities will lead to incidents of large-scale discrimination or harm.

Work within this thematic area includes developing corporate governance frameworks, contributing to safety-enhancing benchmarking and metrological methods, and advising leading international organizations (e.g., OECD, UNESCO, GPAI) and standards-setting bodies (e.g., NIST, IEEE) working on AI governance.

Beginning in 2023, we are focusing on governing risks in the research and development of advanced AI systems, as we believe this is the stage of the AI lifecycle where risk mitigation is most critical to avoiding large-scale incidents and ensuring safe, ethical, secure, and trustworthy AI.

Endorsements

Related resources

Takeaways from the fourth edition of The Athens Roundtable

Held at the European Parliament in Brussels, the dialogue focused on implementation and enforcement of AI governance mechanisms at this...

Mapping the AI Value Chain (OECD.AI)

In a report for the OECD AI Policy Observatory, The Future Society mapped opportunities and risks along AI value chains...

Report Release with the Global Partnership on AI (GPAI) on Responsible AI and AI in Pandemic Response

The Future Society had the privilege to support the Global Partnership on AI (GPAI) Responsible AI and Pandemic Response working...

Bridging the AI Trust Gap: Aligning Policymakers and Companies

This project evaluates how ethical priorities in specific AI use cases might differ between industry and policy leaders. This is...

Classifying AI systems used in COVID-19 response

To aid in our information-gathering efforts, starting today, we are inviting those associated with the development of AI systems used in...

A Global Civic Debate on Governing the Rise of Artificial Intelligence

The Future Society and its AI Initiative coordinated an unprecedented 7-month civic consultation open to the public to better understand...