AI systems present a governance challenge that is unprecedented in scale and complexity. Central to this challenge are the emergent properties of advanced AI systems: state-of-the-art, large-scale AI systems are beginning to demonstrate behaviors that even their developers did not design or predict. As these systems become increasingly capable, it grows ever more difficult to forecast how they might be misused or how they will interact with society.

Severe and systemic power imbalances magnify the challenge of governance. State-of-the-art AI systems confer substantial commercial advantages, but their development requires enormous sums of capital—attracting primarily private rather than public-sector or academic investment. As a result, leading AI and technology companies are racing ahead in research, using their resources to influence policymaking, and hastily deploying untested technology in pursuit of market advantage. Unless stronger oversight and more robust laws are implemented, we face the possibility that unfettered development of AI capabilities will lead to incidents of large-scale discrimination or harm.

Work within this thematic area includes developing corporate governance frameworks, contributing to safety-enhancing benchmarking and metrological methods, and advising leading international organizations (e.g. OECD, UNESCO, GPAI) and standards-setting bodies (e.g. NIST, IEEE) working on the governance of AI.

Beginning in 2023, we are focusing on the governance of risks in the research and development of advanced AI systems, as we believe this is the stage of the AI lifecycle where risk mitigation is most critical to avoiding large-scale incidents and ensuring safe, ethical, secure and trustworthy AI.
