
Consider supporting our work

April 10, 2023

Over the past few weeks, the web has been abuzz with AI news. In these turbulent times, civil society organizations like ours are doubling down on efforts to ensure that human rights and fundamental values are not only defended but actively promoted. If you find yourself wondering how you can make a difference, you might consider funding our work. Here are a handful of reasons why your financial support to our organization can make a significant positive difference:

1. AI governance is important. Artificial intelligence is transforming the human experience. Some experts believe that AI will radically improve the world; others think it will open the door to harms at extreme scales. Many experts, in fact, believe both scenarios are possible. One thing is certain: our future will be the result of human decisions about how to design, test, and deploy technology. This, we believe, is the collective action problem of our time!

2. AI systems are already causing harm. Some AI systems, such as facial recognition and predictive analytics, have long-standing histories of discriminatory outcomes. In recent years, newer technologies, such as generative text and image systems, have been harnessed to propagate misinformation and orchestrate large-scale cyberattacks. Within only the past few months, frontier language models have exhibited new characteristics, such as deception and power-seeking, which could introduce new risks and harms, while the growing integration of AI systems in technology is likely to make adverse impacts more destructive and widespread.

3. AI governance is tractable. Humans are behind the wheel—changes in human behaviors can steer the course of technological development toward a safer and more equitable future. Responsible investments in AI metrology and safety-oriented research, for example, can help us prepare better-informed regulatory frameworks before more powerful systems are deployed. Public-private cooperation can provide the resources and perspectives needed to tackle the complex task of overseeing increasingly capable AI systems. Industry-wide responsible practices and legislative action can slow the unfettered deployment of unsafe systems. Governments can establish verifiable, enforceable agreements to prevent large-scale hazards. It will require a concerted effort of researchers, policymakers, and companies to ensure that AI systems of the future align with human values and the rule of law.

4. AI governance may be a front-loaded challenge. Laws, policies, and norms have compounding effects. While crafting AI governance will be an ongoing process, getting governance mechanisms in place early is critical. This will necessitate investing heavily in the ‘upfront’ work—conducting research, developing policies and regulations, establishing standards and best practices, and engaging with stakeholders—needed to lay foundations for more robust laws, policies, and norms in the long term, even when many difficult questions remain unanswered. Although this front-loaded effort is onerous, failing to prepare for the technology of the future could prove far more costly.

5. Our work can have a long-term, positive impact. The institutions, laws, and norms we establish now will shape the development and use of AI systems for years to come. A proactive effort could steer technology toward benefiting society broadly. On the other hand, if we fail to prepare adequately or act with foresight, we risk stumbling into a future in which powerful AI systems cause irreversible damage to our social fabric. How we research, discuss, and craft governance frameworks in this early stage may reverberate for decades. This makes AI governance impactful, even if it can feel like we are only taking the first steps.

6. We take a prioritized, portfolio-based approach. Our resources are limited, but the challenges and opportunities in AI governance are vast. This is why we prioritize a ‘portfolio’ of work that tackles the most significant and tractable problems. By methodically assessing the landscape of governance approaches, we aim to focus our efforts on the areas of greatest potential impact. For example, we may address understudied gaps or build on emerging best practices with multiplier effects. A portfolio approach also allows us to balance diverse work streams, such as researching and advocating for safer industry practices, convening policymakers, companies, and civil society to home in on fundamental governance challenges, and providing capacity-building and educational programs for judicial operators and the public at large.

7. We are independent. As an independent organization not beholden to companies or governments, we are uniquely well-positioned to work toward AI governance that serves the public interest. We aim to bring impartial analysis and judgment to governance discussions without profit motives or political pressures distorting our work. Our non-affiliated status enables us to function as a neutral convener and collaborator with external stakeholders, including companies, policymakers, and other actors working towards responsible stewardship of AI.

8. We face existing funding gaps. We have a number of potentially high-impact activities queued up, but due to funding constraints, we are currently unable to launch them. By funding our work, you could help us make progress on a range of open questions—and ultimately, help determine whether advanced AI is shaped by commercial interests alone or by a diversity of voices.

9. We are fighting an uphill battle. Corporations are pouring billions of dollars into AI, and their profit incentives can be misaligned with broader public interests. Independent nonprofit organizations are crucial players to ensure that governance solutions serve the public interest, and we rely upon philanthropic support to have an effect commensurate with other stakeholders’ influence.

10. We have a strong track record. Since our incorporation in 2016, TFS has been a major player in the production of foundational AI governance frameworks and policies. Here is a short list of some of our accomplishments:


If you would like to make a financial contribution, please visit our Donate page. If you would like to hear more about our plan for impact, please do not hesitate to reach out to us at donate@thefuturesociety.org.

Related resources

Byte-sized updates & we’re hiring!


The pace of AI developments has kept us busy, and we’re excited to share what we’ve been up to in this quarter’s newsletter.

Cabinet of Rwanda approves National AI Policy


On April 20th, 2023, the Cabinet of Rwanda approved its National AI Policy, which TFS helped draft.

Our 2022 Annual Report


Our 2022 Annual Report provides a retrospective of our accomplishments from the past year, and a perspective on the work ahead of us in 2023.

Takeaways from the fourth edition of The Athens Roundtable


Held at the European Parliament in Brussels, the dialogue focused on implementation and enforcement of AI governance mechanisms at this critical juncture in human history.

New year. New TFS.


2022 was a year of transition and growth at The Future Society.

Working group publishes A Manifesto on Enforcing Law in the Age of AI


The Transatlantic Reflection Group on Democracy and the Rule of Law in the Age of "Artificial Intelligence" convened for a second year to draft a manifesto calling for the effective and legitimate enforcement of laws concerning AI systems.