Senior Manager, AI Safety and Security Policy
Full-time FAS staff
Washington, D.C. (hybrid)
Apply by December 15, 2025
To Sum It Up…
What’s the “elevator pitch” for the role?
FAS is building the capacity to govern increasingly advanced AI systems in the public interest. As Senior Manager, AI Safety and Security Policy, you will drive ambitious efforts to turn cutting-edge technical insights into real policy impact—shaping how the U.S. anticipates and manages the challenges of frontier AI.
You’ll identify and advance ideas that can make rapid AI progress safer for everyone—crafting proposals, bridging divides, and helping decision-makers act before capabilities outpace oversight. This is a role for a policy entrepreneur: someone who sees opportunities for change and moves to realize them.
You’ll work closely with researchers, policymakers, civil society, and industry experts across diverse viewpoints, from AI safety and security to fairness and innovation, with support from cross-functional FAS teams. Success means not only producing analysis, but ensuring that pluralistic, evidence-based approaches to frontier AI governance take root in the places where decisions are made.
If you’re motivated by the challenge of bridging technical and policy worlds to shape how society mitigates catastrophic AI risks—and to ensure transformative AI serves the public good—this is a chance to lead from the front.
This position will report to the Associate Director of AI and Emerging Technology Policy.
Skills and Expertise: Must-Haves
What skills do you need to show proficiency (or higher) in order to be a strong candidate?
- 7+ years of relevant experience across think tanks, government, academia, or industry.
- Experience with one or more areas of AI safety and security policy (e.g., frontier AI safety; AI and chemical, biological, radiological, and nuclear (CBRN) weapons; AI and cybersecurity; red-teaming; dangerous capabilities evaluations).
- Aptitude for policy entrepreneurship, including an ability to identify policy windows and match interventions (briefings, convenings, public comments, etc.) to moments of maximal impact.
- Proven ability to craft precise, persuasive, and implementable policy proposals and communicate complex ideas clearly to technical and non-technical audiences alike.
- Ability to engage critically and carefully with technical AI work—understanding research claims, methods, and limitations—and translate these into policy-relevant insights.
- Capacity to engage constructively across differing schools of thought in AI policy (e.g., safety, security, fairness, and innovation) while maintaining analytical rigor and respect for diverse evidence bases.
- Track record of working across teams and mentoring or guiding colleagues in a mission-driven, fast-moving environment.
Skills and Expertise: Preferred
- Direct government experience, particularly in areas relevant to AI policy.
- Experience briefing senior decision-makers.
- Experience managing people and complex projects; clear, empathetic communication and stakeholder coordination.
- Experience convening cross-sector stakeholders and building consensus, particularly around challenging topics.
- Hands-on technical experience in AI/ML, for example developing or deploying frontier AI systems.
- Research or publication record, especially on AI policy (e.g., reports, academic articles, public commentary).
- Professional networks spanning AI research, policy, and advocacy communities, particularly AI safety and security.
- Advanced degree (e.g., MS, JD, MPP/MPA, or PhD), especially with a focus on AI governance, safety, or security.
Key Responsibilities:
The following is an overview of the main responsibilities of the successful candidate. Please note that other tasks may be required, and responsibilities will vary over time.
Policy Analysis and Development
- Identify policy windows, including areas where proactive, targeted interventions (e.g., public comments, convenings, briefings) can support tangible policy change.
- Write and publish rigorous, timely analysis that anticipates emerging AI risks and opportunities and is useful to policymakers.
- Develop well-scoped research projects on AI safety and security policy where you see potential windows to drive policy change.
- Engage with external experts, including technical AI experts, to conduct relevant research.
- Respond to requests for technical assistance from policymakers, including through briefings and responses to requests for information.
- Collaborate on AI policy development with our Day One community, including through our AI Safety Policy Entrepreneurship Fellowship.
- Collaborate with internal teams on relevant policy projects (e.g., AI and nuclear weapons).
Project and Program Management
- Manage and mentor staff with regular, timely, and constructive feedback. Contribute to personnel development and foster a curious, inclusive, and ambitious team culture.
- Manage project deadlines and deliverables, ensuring timeliness and high quality. Collaborate with other internal teams to ensure alignment and execution on relevant AI projects. Attend relevant internal meetings.
- Oversee future iterations of FAS’s AI Safety Policy Entrepreneurship Fellowship in collaboration with other FAS staff.
- Assist with the preparation and management of budgets in collaboration with other FAS staff.
- Participate in fundraising efforts as appropriate and draft reports for relevant funders.
- Support the Associate Director of AI and Emerging Technology Policy in managing the AI portfolio at FAS.
Policy Engagement & Convening
- Regularly engage with policymakers, civil society, technical AI experts, and a range of AI policy communities to share and refine ideas.
- Represent FAS at relevant external events, including travel as needed.
- Serve as an expert commentator for journalists, on social media, and in the popular press.
- Design and manage relevant events (workshops, briefings, convenings, etc.), supported by other FAS staff, including formulating event agendas, themes, and speakers, and bringing a diverse range of ideas into dialogue.
Why FAS?
Does FAS sound like an organization that you would be energized to join? Is it aligned to your values?
The Federation of American Scientists (FAS) takes seriously our role as a beacon and voice for the science community. FAS has a rich history: after the devastating bombings of Hiroshima and Nagasaki in 1945, a group of atomic researchers—deeply concerned about the use of science for malice—created an organization committed to using science and technology to benefit humanity.
The group they created—the Federation of Atomic Scientists—soon became the Federation of American Scientists in recognition of the hundreds of scientists across diverse disciplines who joined together to speak with one voice for the betterment of the world.
Today, we are a group of entrepreneurial, intrepid changemakers, forging a better future for all through the nexus of science, technology, and talent. We value fairness, inclusion, and transparency, and are focused on being impact-driven and growth-oriented as a force for good in the world.
Our previous AI policy work has included:
- Developing AI safety policy proposals that have shaped government policy.
- Analyzing AI companies’ preparedness frameworks for the National Institute of Standards and Technology.
- Submitting recommendations to inform the administration’s AI Action Plan.
- Hosting private convenings, including with administration officials and members of Congress, at the intersection of AI and various global risks, including cybersecurity, biosecurity, and nuclear weapons.
- Publishing policy memos at the intersection of AI and energy in collaboration with leading researchers.
- Working with scientists and policymakers to share ideas on how AI can accelerate scientific research.
- Creating AI policy proposals specifically targeted at Congress, and briefing these ideas directly to staffers on Capitol Hill.
- Launching an ongoing policy development sprint to promote fair and trustworthy AI.
Work Environment
This is a hybrid role: generally two to three days per week on-site at our offices in Washington, D.C., and the remaining days remote, depending on the needs of the organization.
Salary Range
Benefits
FAS offers a competitive benefits and retirement package for employees. Details will be provided to you during the interview process.
FAS Hiring Statement
Don’t check off every box? Apply anyway! Studies have shown that women and people of color are less likely to apply for jobs unless they meet every listed qualification. At FAS, we are dedicated to building a diverse and inclusive workplace, and developing new voices. If you’re excited about this role but your past experience doesn’t align perfectly, we encourage you to apply anyway—you might just be the right candidate.
The Federation of American Scientists is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender identity or expression, national origin, age, disability, genetic information, marital status, amnesty, or status as a covered veteran in accordance with applicable federal, state, and local laws. The Federation of American Scientists prohibits discriminating against employees and job applicants who inquire about, discuss, or disclose the compensation of the employee or applicant or another employee or applicant. Employment is contingent on successful verification of eligibility to work in the United States. Please note that we are unable to offer employment sponsorship for this role.
PLEASE NOTE: we recommend that applicants complete their answers to the application questions in a separate document and paste them in, as the portal will not save their application progress until submission.