Grantmakers & Senior Generalists, Global Catastrophic Risks
Coefficient Giving
Remote
USD 151,800 – 319,500 / year
Location: Remote - Global
Employment Type: Full time
Department: Global Catastrophic Risks
About Coefficient Giving
Coefficient Giving (formerly Open Philanthropy) is a philanthropic funder and advisor. Since 2014, we’ve directed over $5 billion in grants as part of our mission to help others as much as we can with the resources available to us. We work with a range of donors who share our commitment to cost-effective, high-impact giving. Our current funds include Science and Global Health R&D, Navigating Transformative AI, Biosecurity & Pandemic Preparedness, Abundance & Growth, Farm Animal Welfare, and more. In 2025, we recommended more than $1 billion to high-impact causes.
We’re proud of our track record:
We jump-started the field of AI safety and security and have played a vital role in addressing other existential threats, such as mirror bacteria.
Our grants to evidence-backed global health programs have saved over 100,000 lives, and our farm animal welfare grants have improved the lives of over 3 billion animals.
We supported late-stage clinical trials for the R21 malaria vaccine, now being scaled to protect millions of kids globally.
We were the earliest major funder of the YIMBY movement to build more housing. Our grantees have led the charge on major wins like City of Yes in New York, and SB 79 in California, which will enable hundreds of thousands of new housing units.
About the team
Coefficient Giving’s Global Catastrophic Risks (GCR) division houses our teams working on Navigating Transformative AI (spanning technical AI safety research, AI governance and policy, capacity building, and short-timelines special projects) and Biosecurity and Pandemic Preparedness. The GCR team expects to move around $1 billion in grants across these funds in 2026, and we expect this figure to grow significantly in the coming years. A core premise of our work is that if a global catastrophe caused by transformative AI or biotechnology could be prevented by funding and isn't, we consider that failure our responsibility.
Right now, our biggest constraint is people, not funding. On average, each grantmaker on the team is responsible for allocating more than $10 million per year to high-impact work, and we're significantly understaffed relative to the opportunities in front of us. Many grantmakers move far more than that in their first year on the job, e.g. >$50 million. New grantmakers can have substantial counterfactual impact by helping to create organizations and sub-fields that wouldn't otherwise exist, driving priority special projects that wouldn't otherwise happen, and helping some of the most impactful organizations in the world scale ambitiously. The marginal work we currently can't get to is work we think is critically important, so every additional strong hire matters enormously.
Timelines to transformative AI appear to be shortening, and the work our teams fund today will shape how well humanity navigates some of the most consequential decisions of the coming decade. In addition to growing our team, we’ve also launched an internal project to help the existing team operate with substantially more speed, ambition, and urgency. We expect this to meaningfully increase the impact each new hire can have.
We're hiring for multiple roles across several of our teams:
AI Governance & Policy: Works to shape the norms, policies, laws, and institutions that govern how the most capable AI systems are developed and deployed.
We’re hiring generalist grantmakers, who would identify and evaluate funding opportunities across the full range of AI governance and policy issues; U.S. policy grantmakers, focused on building up the U.S. policy ecosystem and political coalitions necessary to ensure beneficial AI outcomes; a China specialist, to support work that facilitates Chinese contributions to AI safety and useful cooperation between China and the West; and an AI information security specialist to support infosecurity work related to AI safety and control. We’re also hiring a Chief of Staff to help steer and amplify the team as we scale.
Short Timelines Special Projects: Focuses on driving forward new projects that aren't sufficiently covered or invested in by other teams at Coefficient Giving, and which seem likely to be especially important if timelines to transformative AI are short.
GCR Capacity Building: Focuses on growing and strengthening research fields related to navigating transformative AI, and the broader community and ecosystem focused on global catastrophic risks.
Biosecurity and Pandemic Preparedness: Works to reduce the risk of catastrophic biological events, particularly those arising from the deliberate misuse of biotechnology and AI.
GCR Executive Team: Oversees the entire department (including all the teams above), sets strategy, and works to ensure the GCR division achieves its goal of preventing catastrophic harm. We’re hiring for a senior generalist role to lead special projects and boost the impact of the wider division.
If you want to know more about what this work looks like day-to-day, you can read profiles of a few grantmakers across our GCR teams here.
Common features of these roles
There is significant overlap in the types of responsibilities, activities, and skills required for roles on each of the grantmaking teams listed above and the GCR executive team. This section describes the responsibilities applicable to all teams, and the following sections describe aspects unique to specific teams.
Because these roles share many of the same underlying skills, we encourage candidates to indicate interest in every team that could be a plausible fit. We will evaluate candidates for all of their preferred teams through a single streamlined process, with specific team placement determined in the final stages. Candidates can update their preferences at any point during the round.
Across all our teams, grantmakers might:
Identify the most important projects and organizations that need to exist, and make them happen. Our grantmaking work is increasingly proactive — scoping out priority projects, finding or developing the right founders, and actively building new initiatives rather than waiting for proposals to come in.
Investigate grant opportunities. Essentially, a grant investigation is a focused, practical research project aimed at answering the question, “Should this project be funded, and at what level?”
Own special projects that go beyond grantmaking. Some examples include incubating impactful projects, headhunting founders, and working with for-profit entities to achieve a specific outcome in the world.
Design, implement, and advertise new grantmaking initiatives. One recent example is launching an incubator to develop new leaders in critical and neglected fields.
Conduct research to inform program strategy, such as helping Coefficient Giving investigate a new grantmaking area within the global catastrophic risks ecosystem or evaluate the historical cost-effectiveness of a certain kind of grant.
Build and maintain relationships in the field, ensuring that feedback flows between us, our grantees, and other stakeholders.
We also have two senior generalist openings (Chief of Staff, AI Governance & Policy and Program Officer, GCR Executive Team) with different responsibilities, described in the dedicated sections below. These are not grantmaking roles, but we think they can be at least as impactful: they work very closely with program leadership to amplify the impact of their teams.
We expect to make hires at any of four levels:
Senior Program Associates investigate funding opportunities and proactively develop new ideas that lead to grants. On occasion, they also carry out research and evaluation work, or other highly impactful non-grantmaking projects.
Associate Program Officers lead projects with increased autonomy, manage particularly complex projects, own sub-areas of their team’s strategy, and manage Program Associates.
Program Officers own a significant fraction of their team’s strategy, design and implement new grantmaking initiatives, and/or manage several direct reports.
Senior Program Officers fully own a program area within one of our larger teams or lead a small team. We’re excited about hiring at this level in exceptional cases.
You might be a good fit for these roles if most of the following applies to you:
Familiarity with the global catastrophic risks ecosystem. You are interested in, have spent time engaging with, and are motivated to work on catastrophic risks posed by transformative AI and/or biotechnology. (Note: these roles do not require that you have technical expertise — for example, you do not need to be able to replicate a technical paper.)
Ownership and agency. You are accustomed to taking full responsibility and ownership over poorly scoped projects and/or areas of ongoing work. You will push to make the right thing happen and to move things forward, even if it requires rolling up your sleeves to do something unusual, difficult, and/or time-consuming.
Ambition and speed. You generate creative, bold ideas to capture more impact, and you’re comfortable moving quickly to execute on ambitious plans that could make a significant difference in the world.
Social awareness and flexibility. You can effectively communicate and build relationships with people across a wide range of professional contexts. You feel excited by the idea of developing a broad professional network among and deep understanding of key stakeholders relevant to global catastrophic risks, and collaborating with them to deliver on your strategic priorities.
Critical thinking. You have strong analytical and critical thinking skills, especially the ability to quickly grasp complex issues, find the best arguments for and against a proposition, and skeptically evaluate claims. You should feel comfortable thinking in terms of expected value and reasoning quantitatively and probabilistically about tradeoffs and evidence.
Good judgment. You can identify and focus on the most important considerations, have good instincts about due diligence and efficiency, and can form reasonable, holistic perspectives on people and organizations.
Clear communication. You communicate in a clear, information-dense, and calibrated way, with good reasoning transparency, both in writing and in person.
We expect all our staff to:
Put our mission first, and act with urgency to help us realize our ambitious goals.
Work to model our operating values of ownership, openness, calibration, and inclusiveness.
The ideal candidates for these positions will possess many of the skills and experiences described above and in the role-specific sections below. We don’t require candidates to meet all these criteria, and firmly believe there is no such thing as a “perfect” candidate. If you are on the fence about applying because you are unsure whether you are qualified, we strongly encourage you to apply.
AI Governance & Policy
The AI Governance & Policy (AIGP) team, led by Luke Muehlhauser, funds work to shape the norms, policies, laws, and institutions that govern how the most capable AI systems are developed and deployed. In 2025, the AIGP team moved over $140 million to impactful organizations and projects, making it one of the largest philanthropic funders of frontier AI policy. Its work includes AI governance research to improve our collective understanding of how to achieve effective AI governance, and AI governance policy and practice to improve the likelihood that good ideas are actually implemented by companies, governments, and other actors. Our portfolio spans U.S. and international policy research and advocacy, field-building, fundamental strategic research, technical governance (e.g. evaluations, verification mechanisms), and more.
Compared to the other teams hiring in this round, AIGP grantmaking tends to involve a strong understanding of key institutions like frontier AI companies, governments, international bodies, and the political dynamics within them. Familiarity with relevant technical domains like machine learning or hardware can be valuable, as can knowledge of policy specifics.
While we’re open to hiring talented generalists, the AIGP team is particularly interested in people with:
Prior experience in policy, for example in a government role or at a think tank. Experience in U.S. policy would be particularly valuable.
A technical background relevant to AI safety (e.g. in information security or frontier AI hardware).
Experience working on AI governance or policy at a frontier AI company.
U.S. policy specialist grantmakers
We are also hiring for one or more grantmakers focused specifically on grantmaking for U.S. policy. We're open to people from a wide variety of backgrounds, though we have a strong preference for candidates based in Washington, D.C.
Specific profiles we're interested in include:
A technical AI governance specialist who can bridge deep knowledge of AI capabilities and risks with the D.C. policymaking world.
A coalition-building expert with knowledge of the political economy of AI who can map how political coalitions form and shift, and invest in building durable cross-partisan support for AI governance.
A political and campaign strategist who has experience running or funding issue-area advocacy.
China specialist
We are also hiring for a China specialist to help us find ways to support Chinese contributions to AI safety and useful cooperation between China and the West, including through forums like the International Dialogues on AI Safety. Applicants for this role should be fluent in Mandarin and have a strong educational and/or professional background in China studies, particularly Chinese politics, policy, business, economics, and/or recent history.
AI information security specialist
We are also hiring for an AI information security grantmaker. Our work on "AI infosec" includes safeguarding model weights and algorithmic breakthroughs, preventing system poisoning or sabotage, securing training data and compute resources, addressing vulnerabilities across the full machine learning supply chain (from compute resources to MLOps), enabling secure third-party access for audits and evaluations, and ensuring high security standards for other governance-enabling techniques (e.g. international agreement verification mechanisms).
This portfolio will likely span technical research, policy development, field-building initiatives, and ecosystem support. Our previous grants have supported RAND's Meselson Center (which authored Securing Model Weights), security field-building projects such as Heron, and benchmarks like Cybench, CVEbench, and BountyBench.
Chief of Staff
We are also hiring for a Chief of Staff, who will serve as a close thought partner to Luke Muehlhauser, produce nuanced recommendations for senior leadership, and design organizational infrastructure for the rapidly growing AIGP team. This is a senior generalist role, not a grantmaking role.
The desired traits for this role are similar to those for the grantmaker positions, but the responsibilities differ, and will likely include:
Acting as a force multiplier for Luke by helping him focus on AIGP’s top priorities, advising him on critical decisions, and owning initial drafts or full segments of his portfolio.
Project managing large initiatives across AI teams, such as strategy refreshes, public communications, and process updates that require significant stakeholder coordination.
Working closely with Luke to shape the strategy and structure of the AIGP team, including by identifying new priority areas and deciding who should own them.
Reviewing internal processes to accelerate the team’s grantmaking and raise the ambition and impact of our grantees.
Designing and leading hiring rounds end-to-end, from crafting the JD to recommending the final decision.
Managing grantmakers and other staff at various levels of seniority.
We are open to a wide range of experience levels, from early-career to very senior, and the shape of the role will be tailored to the successful candidate. Strong candidates will combine sharp strategic judgment, a clear track record of ownership and execution, and the ability to effectively support teams as they scale. There is no location requirement for this role.
AIGP is open to remote hires and does not have a strong location preference unless otherwise indicated, though proximity to Washington, D.C. or San Francisco can be helpful given the nature of the policy work.
Short Timelines Special Projects
The Short Timelines Special Projects (STSP) team, led by Claire Zabel, focuses on driving forward new projects that seem likely to be especially important if timelines to transformative AI are short.
The STSP team’s remit is extremely broad, and its mission is to ensure that if money can improve outcomes in a world of short AI timelines, it does. Relative to other teams, STSP is less focused on grantmaking in particular — though we view grantmaking as a valuable tool — and is more open to using a zoomed-out position and financial leverage to "do what it takes". In addition to grantmaking, STSP is open to conducting research, incubating impactful projects, supporting coordination efforts, headhunting, working with for-profit entities, and more.
The STSP team is particularly interested in hiring people with the following traits:
Openness to working on a wide variety of projects rather than specializing deeply in one sub-area.
A relatively deep and broad understanding of AI strategy and AI futurism.
Excitement about getting new projects off the ground and working in relatively unscoped areas with few collaborators.
The STSP team prefers candidates to be based in the San Francisco Bay Area, but is open to considering candidates in other locations. We’ll support candidates with the costs of relocation to the Bay.
GCR Capacity Building
The Global Catastrophic Risks Capacity Building (GCRCB) team focuses on growing and strengthening the fields of researchers and practitioners working on Navigating Transformative AI as well as the broader ecosystem concerned with global catastrophic risks. We believe capacity building is extremely leveraged, and much of the GCR team’s total impact to date is downstream of the GCRCB team’s earlier work.
Relative to other teams, the GCRCB team focuses more on building infrastructure and pipelines to attract new talent and support people and organizations seeking to address risks from transformative AI and biosecurity. Compared to other roles in this round, the GCRCB team’s work is often better suited to a generalist mindset (as opposed to deep, narrow specialization). If you’re excited to think about a wide range of ways to support the broad GCR ecosystem and remain flexible about how you deliver impact, the GCRCB team might be a good fit. This is both because GCRCB’s own portfolio is diverse, and because GCRCB often helps other teams with their grantmaking.
The GCRCB team is particularly interested in hiring people with direct experience in GCR-relevant fields, such as AI safety, biosecurity, or related research and nonprofit communities.
GCRCB is open to remote hires and does not have a strong location preference.
Biosecurity and Pandemic Preparedness
The Biosecurity and Pandemic Preparedness (BPP) team, led by Andrew Snyder-Beattie, funds work to reduce the risk of catastrophic biological events, particularly those arising from the deliberate misuse of biotechnology. BPP’s work focuses on prevention (including biosecurity policy and governance, and safeguards to mitigate AI-enabled bio risks), response in the event of a catastrophe (especially personal protective equipment, biohardening, and detection), as well as field-building to support the broader ecosystem working on these problems.
Compared to the other teams hiring in this round, BPP focuses specifically on catastrophic biological threats, while still supporting a wide range of approaches within that space. The team’s work is often entrepreneurial, with a strong emphasis on identifying gaps and helping to launch new organizations or initiatives to address them. While familiarity with biosecurity, public health, or biotechnology can be helpful, it is not required, and many of BPP’s most effective grantmakers come from non-bio backgrounds. That said, for some areas of work, particularly at the intersection of AI and biology, relevant technical or domain expertise can be a significant asset.
For this round, BPP is particularly interested in hiring individuals to work at the intersection of AI and biology or biosecurity field-building.
BPP prefers hires to work in person in Washington, D.C. While significant remote work is possible, we want hires to spend meaningful time in D.C. to connect with the team and our network, and we would cover expenses for such trips. Regardless of primary location, the role may occasionally require travel to other locations, both within the U.S. and internationally.
GCR Executive Team
The GCR executive team, led by Emily Oehlsen and George Rosenfeld, leads the Global Catastrophic Risks division. The team sets high-level strategy, manages the program teams, and owns high-priority special projects to ensure the division can run hard at its goals of navigating transformative AI and reducing catastrophic biorisk. The executive team is responsible for the whole division, and works on projects and decisions that determine which sub-fields end up being prioritized, how well the team can execute, and how hundreds of millions of dollars are spent.
Unlike most other roles in this round, this is a senior generalist role rather than a grantmaking role. It provides direct access to senior leadership and decision-making across the division, has the potential to amplify the impact of the whole GCR team, and offers a platform for leadership and strategy work.
The team is open to candidates at a wide range of experience levels, from early-career to very senior, and the shape of the role will be tailored substantially to the successful candidate.
Responsibilities at more junior levels are likely to focus on:
Preparing Emily and George for meetings with high-stakes external stakeholders (e.g. major funders, frontier AI companies, senior policymakers).
Writing strategic memos that shape major decisions for the division, such as identifying the most important neglected priorities and recommending how to address them.
Reviewing internal processes to accelerate the division's grantmaking and drafting communications to raise the ambitions of grantees.
Identifying opportunities to use AI to automate significant parts of the team's work, leveraging growing AI capabilities to keep pace with a rapidly changing world.
At more senior levels, responsibilities could additionally or instead include:
Identifying and executing major pivots and special projects the division should undertake in the run-up to transformative AI.
Helping to set division-wide strategy, including on questions like the division's house view on AI timelines and how the team should respond to major shifts in the AI landscape.
Leading investigations to identify the division's next area of expansion, and potentially spinning up a new team to run it.
Managing team leads or other senior staff, and developing outreach strategies to recruit senior talent into the division's highest-priority open positions.
Depending on experience, successful applicants will be hired at the Associate Program Officer, Program Officer, or Senior Program Officer level. The team is open to considering alternate titles based on candidate seniority and preference (e.g. Special Projects Lead, Chief of Staff, Strategy Lead). Strong candidates will combine sharp strategic judgment, a clear track record of ownership and execution, and the ability to push teams to aim higher and move faster.
There is no hard location requirement for this role, though the team prefers candidates based in (or willing to relocate to) Washington, D.C., London, or the San Francisco Bay Area — and/or candidates who are able to travel frequently to those locations to facilitate regular in-person work.
Role details & benefits
Compensation: Baseline compensation depends on team (for the Chief of Staff role, on seniority) and location. At every level, 15% of total compensation is paid as an unconditional 401k grant, capped at $24,500; the sketch after this list illustrates the split.
The starting compensation for a Senior Program Associate is $151,800 – $189,000.
The starting compensation for an Associate Program Officer is $204,500 – $254,000.
The starting compensation for a Program Officer is $225,000 – $280,000.
The starting compensation for a Senior Program Officer is $257,000 – $319,500.
The starting compensation for the Chief of Staff role on the AIGP team is $231,000 – $280,000.
For internationally based hires, all compensation is distributed as take-home salary.
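For illustration, here is a minimal sketch (in Python) of how the salary/401k split works, assuming the 15% grant is computed on the total compensation figure and capped at $24,500, with the remainder paid as take-home salary. The numbers are examples only; exact terms come from your offer.

# Illustrative sketch only: assumes the 15% grant is computed on total
# compensation and capped at $24,500, with the remainder paid as salary.
def split_compensation(total, grant_rate=0.15, grant_cap=24_500):
    """Return (401k grant, take-home salary) for a total compensation figure."""
    grant = min(total * grant_rate, grant_cap)
    return grant, total - grant

# Example: the endpoints of the Senior Program Associate band.
for total in (151_800, 189_000):
    grant, salary = split_compensation(total)
    print(f"${total:,}: 401k grant ${grant:,.0f}, salary ${salary:,.0f}")
# $151,800: 401k grant $22,770, salary $129,030
# $189,000: 401k grant $24,500, salary $164,500 (the cap binds here)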
Time zones and location: We offer remote work in many countries, and we are open to hires outside the U.S.
We’ll also consider sponsoring U.S. work authorization for international candidates (though we don’t control who is and isn’t eligible for a visa and can’t guarantee visa approval).
Benefits: Our benefits package includes:
Excellent health insurance (we cover 100% of premiums within the U.S. for you and any eligible dependents) and an employer-funded Health Reimbursement Arrangement for certain other personal health expenses.
Dental, vision, and life insurance for you and your family.
Four weeks of PTO recommended per year.
Four months of fully paid family leave.
A generous and flexible expense policy — we encourage staff to expense the ergonomic equipment, software, and other services that they need to stay healthy and productive. This policy also includes a productivity benefit, which provides a set amount for staff to expense items that enhance their productivity.
A continual learning policy that encourages staff to spend time on professional development with related expenses covered.
Support for remote work — we’ll cover a remote workspace outside your home if you need one, or connect you with a Coefficient Giving coworking hub in your city. We currently have offices in San Francisco and Washington, D.C., and multiple staff working from several other cities in the U.S. and elsewhere.
We can’t always provide every benefit we offer U.S. staff to international hires, but we’re working on it (and will usually provide cash equivalents of any benefits we can’t offer in your country).
Start date: The start date is flexible, and we may be willing to wait for an extended period of time for the best candidate, though we’d prefer successful candidates to start as soon as possible after receiving an offer.
We aim to employ people with many different experiences, perspectives, and backgrounds who share our passion for accomplishing as much good as we can. We are committed to creating an environment where all employees have the opportunity to succeed, and we do not discriminate based on race, religion, color, national origin, gender, sexual orientation, or any other legally protected status.
If you need assistance or an accommodation due to a disability, or have any other questions about applying, please contact jobs@coefficientgiving.org.
Please apply by 11:59 p.m. Pacific Time on May 17, 2026 to be considered.
U.S.-based staff are typically employed by Coefficient Giving LLC, which is not a 501(c)(3) tax-exempt organization. As such, this role is unlikely to be eligible for public service loan forgiveness programs.
We may use AI to assist in the initial screening of applications, including to detect whether candidates have used AI models in drafting their application. Decisions are always made by a human on our team.
If you have any questions about our use of AI tools, you can email jobs@coefficientgiving.org.