
Our hiring partners' roles are listed below, but we're here to help!

First, submit your resume to us directly so we can help make personal intros. We'll be able to vouch for your candidacy and can encourage partners to review your profile! From there, share this with any mission-driven job-seekers in your network; we'd be glad to support them, too.

Senior Generalist Roles on our Global Catastrophic Risks Team

Open Philanthropy

Remote
USD 149,903.24–188,255.37 / year
Posted on Nov 1, 2025

Location: Remote - Global

Employment Type: Full time

Department: Grantmaking - Global Catastrophic Risks

Open Philanthropy’s Global Catastrophic Risks (GCR) division works to reduce the likelihood of events that could cause major global harm. This umbrella houses our teams working on Navigating Transformative AI, Biosecurity and Pandemic Preparedness, GCR Capacity Building, and Forecasting, as well as other special projects related to catastrophic risks. The GCR team expects to move over $500 million in grants in these focus areas in 2025, and we expect this figure to grow significantly in the coming years.

As Open Philanthropy’s GCR portfolio grows rapidly, leadership capacity has become an increasingly important constraint. Hiring senior generalists into roles like Chief of Staff has helped us to address this bottleneck, and we think bringing in more talented generalists can increase the team’s impact on some of the world's most pressing problems. We’re looking for people with strong ownership, strategic thinking, stakeholder management, and leadership skills.

We are therefore hiring up to four new generalists to drive key initiatives, improve our output and decision-making across the team, and maximize the impact of our portfolio as it grows:

  • Chief of Staff, Global Catastrophic Risks: joining the executive team of the division, this role will work closely with Emily Oehlsen and George Rosenfeld to lead the wider GCR team, with a focus on supporting our existing programs, developing a forward-looking strategy, and growing the team and its impact.

  • Chief of Staff, Technical AI Safety: joining our rapidly growing technical safety team, this role will work closely with Peter Favaloro and focus on accelerating the team’s work, with a focus on organization, strategy, and stakeholder management. Technical experience is not required.

  • Chief of Staff, Short Timelines Special Projects: joining a new special projects team, this role will work closely with Claire Zabel and will require a greater degree of agility and a higher level of context on AI futurism. It will focus on getting new projects off the ground and growing a new team to help maximize Open Philanthropy’s impact if timelines to reach AGI are short.

  • Associate Program Officer, AI Governance & Policy: joining our largest AI team, this role will work closely with Alex Lawsen. Depending on fit and interest, you may focus on supporting team growth and management or lean toward identifying and developing new grantmaking opportunities. For this role, we are open to applicants who are earlier in their careers than others in this round.

The deadline for applications is 11:59 p.m. Pacific Time on Thursday, November 20.

If you think you might be a good fit for one or more of these roles, we encourage you to apply!

About Open Philanthropy

Open Philanthropy is a philanthropic funder and advisor; our mission is to help others as much as we can with the resources available to us. We stress openness to many possibilities and have chosen our focus areas based on importance, neglectedness, and tractability. Our current giving areas include navigating transformative AI, global health and development, scientific research, global public health policy, farm animal welfare, and biosecurity and pandemic preparedness. In 2024, we recommended $650 million to high-impact causes, and we’ve recommended over $4.8 billion in grants since our formation.

Common features of these roles

We expect there to be significant overlap in the types of responsibilities, activities, and skills for each of these roles. In this section, we will describe the shared elements that apply to all the roles, and in the following sections, we will describe the aspects that are unique to each opportunity.

Because there is significant overlap, we strongly encourage qualified candidates to apply to multiple roles. We will evaluate candidates for all their preferred roles through a single streamlined process until the final evaluation stages, and candidates can update their preferences at any point during the round.

In any of these roles, you would act as a trusted advisor and collaborator, help to prioritize and manage tasks, own important initiatives, provide thought partnership on strategic decisions, and act as an extension of your principal in meetings and communications.

We expect that an exceptional hire could grow in a variety of directions within the organization, e.g. overseeing a new grantmaking workstream, taking on line management of team members, or spinning up and leading new funding areas.

Key Responsibilities

The exact responsibilities of each role will depend on the specific team and hire and will evolve over time, but core duties will likely include:

  • Strategic input and thought partnership. As a thought partner to your principal, you will offer proposals and recommendations on issues vital to their work, advise on team questions and strategic decisions, and develop solutions for other hurdles as needed.

  • Driving critical projects. You will lead and accomplish key initiatives and special projects, from defining problems to proposing solutions to executing the plans. The precise selection and scope of your assigned projects will depend on your team, abilities, and preferences.

  • Stakeholder management. You will build relationships with stakeholders across Open Philanthropy and the broader ecosystem. You’ll relay their requests and feedback to your principal and communicate your principal’s vision and objectives in return.

  • Organization and executive support. You will help determine the highest-impact things for you and your principal to accomplish and prioritize ruthlessly to get them done. You’ll seek out ways to be a “force multiplier” for your principal, including improving their efficiency and taking tasks off their plate where possible. This might involve determining who needs to do what, developing meeting agendas and briefing documents, owning or assigning follow-up tasks from meetings, and identifying process improvements. In many cases, you’ll propose improvements for our (excellent) Operations team to execute.

A typical "day in the life" might involve:

  • Setting daily goals for yourself and your principal.

  • Joining meetings with your principal (1-3 hours) and recommending appropriate next steps: what to delegate, what to track, and what requires follow-up.

  • Carving out focus time to push forward your priority projects.

  • Co-working (in person or remotely) with your principal, including discussions of key decisions and challenges.

We’re looking for people who:

  • Are highly organized, reliable, and comfortable juggling competing priorities.

  • Are able to think strategically and provide useful input on a wide range of questions.

  • Are able to take on large, unscoped projects with relatively little oversight, and proactively drive them to completion.

  • Are comfortable working closely with senior leaders and handling confidential information with discretion.

  • Have strong judgment and can make sound recommendations on complex questions.

  • Have a "service mindset" and a willingness to act as a force multiplier for others.

  • Have strong interpersonal and communication skills, and are able to represent your principal in writing and in person with both external and internal stakeholders.

  • Have a collaborative work style and eagerness to learn.

  • [Ideally] Are broadly familiar with the GCR and/or AI safety ecosystem and the core concepts behind its work.

  • [Ideally] Have previous experience with Chief of Staff-style roles or other generalist work such as operations, product management, research management, consulting, strategy, people leadership, etc.

We expect all our staff to:

  • Put our mission first, and act with urgency to help us realize our ambitious goals.

  • Work to model our operating values of ownership, openness, calibration, and inclusiveness.

The ideal candidates for these positions will possess many of the skills and experiences described above and in the role-specific sections below. However, we don't require candidates to meet all these criteria, and firmly believe there is no such thing as a "perfect" candidate. If you are on the fence about applying because you are unsure whether you are qualified, we strongly encourage you to apply.

Working at Open Philanthropy

The information below applies to all roles. More specific details about compensation, location, and other key information are listed underneath the individual roles.

Our benefits package includes:

  • Excellent health insurance (we cover 100% of premiums within the U.S. for you and any dependents).

  • Dental, vision, and life insurance for you and your family.

  • Four weeks of PTO recommended per year.

  • Four months of fully paid family leave.

  • A generous and flexible expense policy — we encourage staff to expense the ergonomic equipment, software, and other services that they need to stay healthy and productive.

  • Support for remote work (for roles that can be done remotely) — we’ll cover a remote workspace outside your home if you need one, or connect you with an Open Phil co-working hub in your city.

  • We can’t always provide every benefit we offer U.S. staff to international hires, but we’re working on it (and will usually provide cash equivalents of any benefits we can’t offer in your country).

Across roles, we value staff who communicate clearly and honestly about what they think, are comfortable giving and receiving feedback, and are interested in taking ownership of their work and proactively seeking ways to help Open Philanthropy meet its goals. For more information about the qualities we look for in employees at Open Philanthropy, see our operating values.

We aim to employ people with many different experiences, perspectives, and backgrounds who share our passion for accomplishing as much good as we can. We are committed to creating an environment where all employees have the opportunity to succeed, and we do not discriminate based on race, religion, color, national origin, gender, sexual orientation, or any other legally protected status.

Chief of Staff, Global Catastrophic Risks

About the team and role

GCR’s leadership team, Emily Oehlsen and George Rosenfeld, oversees strategy and coordination across all GCR programs.

This role will report to George and work closely with both Emily and George to accelerate the GCR team’s progress on its priorities. It will focus on supporting GCR’s central leadership activities, including setting and communicating division-wide strategy, making sure that each of the program teams is on track and set up to succeed, and leading on the identification and execution of priority expansion areas for the division.

This role sits further from our day-to-day grantmaking work than the others in this round but provides direct access to senior leadership and decision-making. We think it has the potential to amplify the impact of the whole GCR team and also provides an excellent platform for those looking to grow in leadership and strategy roles.

Unique responsibilities

  • Compared to the other roles in this round, this opportunity will be especially focused on supporting and improving the decision-making and priorities of senior leadership, and amplifying the impact of the wider division. You will work with Emily and George on many of their top priorities, support and boost their work, act as a thought partner, and own broader initiatives yourself.

  • Example projects:

    • Design and organize a “listening tour” for Emily and George to solicit feedback from field leaders on the GCR team’s strategy, and identify key gaps in the GCR ecosystem that the team should prioritize.

    • Based on discussion with Emily, George, and GCR team leads, write a memo describing how much of the GCR team’s work is betting on short timelines to transformative AI, and what it could do to move in either direction.

    • Lead or oversee the investigation to identify GCR’s next area of expansion, present this recommendation to senior leadership, and potentially spin up a new team to execute this new line of work.

    • Design and coordinate GCR strategy sessions at organizational retreats.

    • Build and execute an outreach strategy for attracting senior candidates to high-priority positions across the GCR teams.

    • Ensure that the Open Philanthropy board is regularly updated about the GCR team’s progress and that key stakeholders across the team receive updates after each board meeting.

    • Develop and implement new approaches for cross-team coordination and communication within the GCR division.

    • Own or provide recommendations on smaller questions and projects that regularly come across Emily and George’s desks.

Unique candidate criteria

You might be a particularly great fit for this role if you:

  • Are excited about reducing global catastrophic risks broadly and are committed to achieving the team’s goals.

  • Are generally familiar with global catastrophic risk and have basic context on areas such as AI safety and biosecurity, even if you don’t have technical expertise in any specific focus area.

  • [Ideally] Have previous senior-level experience in a Chief of Staff role or other senior generalist roles such as management or operations leadership.

Other details

  • Location: There are no hard location requirements for this role, though we prefer candidates based in (or willing to relocate to) Washington, D.C., London, or the San Francisco Bay Area — and/or candidates who are able to travel frequently to those locations to facilitate regular in-person work. Candidates will need to be available for most of a working day in the U.S. East Coast time zone, with some overlap where necessary with London and the U.S. West Coast.

    • We’ll support candidates with the costs of relocation to the San Francisco Bay Area or Washington, D.C.

    • We are happy to consider candidates based outside of the U.S. and to consider sponsoring U.S. work authorization. However, we don’t control who is and isn’t eligible for a visa, and we can’t guarantee visa approval.

  • Compensation: This is a full-time, permanent position. The starting compensation for this role will be either $172,388.73 or $188,255.37 per year depending on the experience of the successful candidate, which would include a base salary of $149,903.24 or $165,255.37 and an unconditional 401(k) grant of $22,485.49 or $23,000.00.

    • These compensation figures assume a remote location; there would be geographic adjustments upwards for candidates based in the San Francisco Bay Area or Washington, D.C.

    • All compensation will be distributed in the form of take-home salary for internationally based hires.

  • Start date: We would ideally like a candidate to begin as soon as possible after receiving an offer, but we are willing to wait if the best candidate can only start later.

Apply for this role

Chief of Staff, Technical AI Safety

About the team and role

The Technical AI Safety (TAIS) grantmaking program sits under our broader focus area of Navigating Transformative AI and supports technical research aimed at reducing catastrophic risks from advanced AI. The team has tripled in size over the past year, to six people and ~$150 million in annual grantmaking, and we expect it to continue growing rapidly in the coming years. You can read more about some of the research areas we fund in our recent Request for Proposals.

We are seeking a strategic and highly organized Chief of Staff to report directly to Peter Favaloro and work closely with him to accelerate TAIS’ progress toward its goals.

This role will have significant responsibility in aggressively scaling a new team in a high-priority area for Open Philanthropy — designing systems to support the team’s work, managing stakeholders to generate buy-in on the team’s plans, and owning projects that resolve bottlenecks to the team’s growth.

Unique responsibilities

  • Compared to the other roles in this round, this opportunity will particularly emphasize partnering with Peter to scale and support the TAIS team. You'll both amplify Peter's impact as a team lead by serving as a thought partner and execution accelerator, and ideally also act as a leader yourself — overseeing complex projects, steering recommendations on team structure and trajectory, and representing the team to internal and external stakeholders.

  • Specific responsibilities will be customized to the abilities and fit of a given candidate. Example projects include:

    • Work with Peter to draft the team’s growth strategy. This would involve collecting feedback from colleagues and collaborators, mapping out the merits and drawbacks of different trajectories, recommending refinements to the strategy, and presenting a plan to Open Philanthropy leadership.

    • Own the launch of a new Request For Proposals. This might involve:

      • Setting high quality standards and efficient delivery timelines.

      • Ensuring the content and design will attract the applications we want.

      • Managing stakeholders at all levels across the organization.

      • Coordinating major contributions from team members, such as drafting the text or running publicity.

    • Design and execute a strategy to recruit senior technical talent. This would likely include meeting with senior AI researchers outside the organization to understand their interests and constraints, designing a role to appeal to the strongest candidates in that audience, and overseeing the hiring process.

    • Line-manage program operations staff who track and analyze our grantmaking, coordinate communications with applicants and grantees, and own new operational projects.

Unique candidate criteria

You might be a particularly great fit for this role if you:

  • Are excited about the work of the TAIS team and committed to achieving its goals.

  • Are broadly familiar with the AI safety ecosystem. Technical expertise is a positive, but not at all required.

  • [Ideally] Have experience in running and scaling a high-functioning team, including line management, team structuring, goal setting, and overseeing several multi-contributor projects in parallel.

Other details

  • Location: Peter is based in the San Francisco Bay Area, and would strongly prefer candidates who can co-work in person (alternately in San Francisco and Berkeley).

    • We are open to exceptional candidates working remotely, but require that most of their working hours overlap with California business hours (~9 am - 5 pm PT).

    • We’ll support candidates with the costs of relocation to the Bay Area.

    • We’ll also consider sponsoring U.S. work authorization for international candidates (though we don’t control who is and isn’t eligible for a visa and can’t guarantee visa approval).

  • Compensation: This is a full-time, permanent position. The starting compensation for this role will be either $186,861.09 or $204,059.77 per year depending on the experience of the successful candidate, which would include a base salary of $163,861.09 or $181,059.77 and an unconditional 401(k) grant of $23,000.00.

    • These compensation figures assume you’d be working from the San Francisco Bay Area; there would be a downward adjustment for candidates based in other locations.

  • Start date: We would ideally like a candidate to begin as soon as possible after receiving an offer, but we are willing to wait if the best candidate can only start later.

Apply for this role

Chief of Staff, Short Timelines Special Projects

About the team and role

The AI Short Timelines Special Projects team is a new team led by Claire Zabel under our broader focus area of Navigating Transformative AI. It focuses on projects that would be useful if timelines to transformative AI are particularly short, such that it arrives before most stakeholders have built up context on the speed and progress of AI capabilities. The team aims to identify and implement projects we expect to be particularly impactful in those circumstances.

We are seeking a strategic and highly organized Chief of Staff to report directly to Claire and work closely with her to prioritize, spin up, and lead key projects for this team, while supporting her organization, output, and strategy.

Although the team is new, its focus requires the capacity to implement large projects as quickly as possible. As Chief of Staff, you will help it rapidly and effectively set and execute its priorities.

Unique responsibilities

  • Compared to other roles in this round, this opportunity will particularly emphasize operating with a high degree of autonomy to help get various new projects off the ground. That will involve forming independent opinions on poorly scoped questions by speaking with internal and external experts, conducting research, formulating proposals, and owning the work to see those proposals realized.

  • Example projects:

    • Circulate a proposal for a new project with 15 of the top experts on AI strategy and AI futurism, gather their feedback, and redraft an updated proposal based on what you learn.

    • Collaborate with Claire on a decision about which of two possible high-priority projects to prioritize next quarter, and propose a process for getting to the right decision quickly.

    • Identify potentially tractable cruxes and other sources of useful information and feedback, and help investigate them.

    • Work with Claire to scope, launch, and manage hiring efforts to help the nascent team scale up rapidly.

Unique candidate criteria

You might be a particularly great fit for this role if you:

  • Can balance nimbleness and efficiency with ensuring that important questions and potential risks are adequately addressed — this is a new team trying to move very quickly on projects where short timelines might matter a great deal.

  • Are good at navigating ambiguity — the projects this team is tackling are likely to be unprecedented and poorly scoped even relative to other GCR team work.

  • Are broadly familiar with the AI safety ecosystem and are interested in AI futurism (deeper AI futurism expertise is a substantial plus).

Other details

  • Location: Claire is based in Berkeley, California, and would strongly prefer candidates who can co-work in person.

    • We are open to candidates working remotely, but require that most of their working hours overlap with Claire’s workday (~10 am - 7 pm PT).

    • We’ll support candidates with the costs of relocation to the San Francisco Bay Area.

    • We’ll also consider sponsoring U.S. work authorization for international candidates (though we don’t control who is and isn’t eligible for a visa and can’t guarantee visa approval).

  • Compensation: This is a full-time, permanent position. The starting compensation for this role will be either $186,861.09 or $204,059.77 per year depending on the experience of the successful candidate, which would include a base salary of $163,861.09 or $181,059.77 and an unconditional 401(k) grant of $23,000.00.

    • These compensation figures assume you’d be working from the San Francisco Bay Area; there would be a downward adjustment for candidates based in other locations.

  • Start date: We would ideally like a candidate to begin as soon as possible after receiving an offer, but we are willing to wait if the best candidate can only start later.

Apply for this role

Associate Program Officer, AI Governance & Policy

About the team and role

The AI Governance and Policy grantmaking program sits under our broader focus area of Navigating Transformative AI, and works to improve society’s preparedness for transformative AI, particularly by mitigating global catastrophic risks. Our 11-person (and growing!) team aims to distribute hundreds of millions of dollars in grants annually over the coming years. You can read more about our priorities in our current Request for Proposals.

We are seeking a highly organized Associate Program Officer to report to Alex Lawsen and support his output and strategy. Alex focuses primarily on topics like information security, model evaluations, and frontier AI safety frameworks. Depending on fit and interest, this role could focus primarily on supporting team management and growth (including hiring and supporting Alex's direct reports), or lean more toward grantmaking development (mapping the grantee ecosystem, identifying gaps, and helping establish new organizations or projects to address them).

Unique responsibilities

  • Compared to the other roles in this round, this opportunity will particularly emphasize working closely with Alex to boost his efficacy and output. You will serve as both a thought partner and an execution accelerator, helping Alex structure his thinking on complex problems, conduct daily prioritization, and manage correspondence with teammates, external advisors, and potential grantees.

  • Example projects:

    • Based on a discussion with Alex, draft a document outlining a new grantmaking workstream within our AI Governance & Policy team, including the case for the workstream, the scope of work, and concrete grantmaking opportunities we would like to pursue.

    • Help Alex identify the 15 best experts to advise on a new subject area, set up meetings with them, join Alex in the meetings, own organization of any follow-ups, and help him synthesize takeaways from across the calls.

    • Serve as a sounding board for Alex on people-related topics, help him improve his management, directly meet with his reports, or spot gaps in the team for potential new hires.

Unique candidate criteria

You might be a particularly great fit for this role if you:

  • Are very well-organized. You will be responsible for handling a lot of small, time-sensitive tasks while keeping track of many details in larger projects. Excellent organizational skills are required to flourish in this role.

  • Are particularly interested in working closely with a principal in a supporting role, and would be happy to remain in a supporting role for at least a couple of years.

  • Are highly self-directed and proactive. You'll need to identify what needs doing before being asked, pull tasks toward yourself, and independently manage your priorities.

  • Are service-minded and comfortable with a support role that involves some repetition; you are motivated by the idea of contributing to what Open Philanthropy needs most, even if it’s not high-profile.

Other details

  • Location: Alex is based in London, U.K., and would prefer candidates based in (or willing to relocate to) the U.K., and ideally those who can co-work in person in London. Candidates will need to be available for most of a working day in either the U.S. East Coast or U.K. time zones.

    • We are open to considering visa sponsorship for work authorization in the U.S. or U.K. However, we don’t control who is and isn’t eligible for a visa and can’t guarantee visa approval.

  • Compensation: This is a full-time, permanent position. The starting compensation for this role will be $172,388.73 per year, which would include a base salary of $149,903.24 and an unconditional 401(k) grant of $22,485.49.

    • These compensation figures assume a London, U.K. location; all compensation will be distributed in the form of take-home salary for hires based outside the U.S.

  • Start date: We would ideally like a candidate to begin as soon as possible after receiving an offer, but we are willing to wait if the best candidate can only start later.

Apply for this role

About the application process and timeline

Because there is significant overlap, we strongly encourage qualified candidates to apply to multiple roles via the application form. We will evaluate candidates for all their preferred roles through a single streamlined process until the final evaluation stages, and candidates can update their preferences at any point during the round.

The specific steps and sequence below may vary depending on the role(s) a candidate is considering, but the application process will likely involve:

  1. Our initial application form, consisting of a series of short questions.

  2. A remote work test, which is likely to take several hours. We pay honoraria for time spent completing our work tests.

  3. Initial interview(s), remote by default.

  4. Final interview(s), potentially in person at one of our offices. We will cover any reasonable expenses, including flights and accommodation, for any in-person interviews.

  5. Reference checks.

Note that the application form does not have an autosave function, so you may wish to draft your answers in a separate document to avoid losing your progress. We read all application answers and consider them closely when evaluating applicants.

We expect to make offers in late January or early February. If you need to hear back from us sooner (e.g. if you’re part of another hiring process with similar timelines), we strongly encourage you to let us know.

Please apply by 11:59 p.m. Pacific Time on Thursday, November 20 to be considered.

We may prioritize candidates who are able to move through the process sooner.

If you need assistance or an accommodation due to a disability or have any other questions about applying, please contact jobs@openphilanthropy.org.

Note: U.S.-based program staff are typically employed by Open Philanthropy Project LLC, which is not a 501(c)(3) tax-exempt organization. As such, this role is unlikely to be eligible for public service loan forgiveness programs.

AI/ML notice: Open Philanthropy may use artificial intelligence (AI) and machine learning (ML) technologies, including natural language processing and predictive analytics, to assist in the initial screening of employment applications. These AI/ML tools assess applications against characteristics and qualifications relevant to the job requisition. These tools are designed to help identify potentially qualified candidates, but they do not make automated hiring decisions. The AI/ML-generated assessments are one of several factors considered in the hiring process. Our human recruiting team will thoroughly evaluate your skills and qualifications to determine your suitability for the role.

If you prefer not to have your application assessed using AI/ML features, you may opt out by reaching out to jobs@openphilanthropy.org and letting us know. Opting out will not negatively impact your application, which will be reviewed manually by our team.