About the role
We are looking for software engineers to help build the foundational safety, oversight, and intervention mechanisms for our AI systems. As a software engineer on the Safeguards team, you will work to monitor models, prevent misuse, and ensure user well-being. This role focuses on building systems that detect unwanted model behaviors and prevent disallowed uses of our models. You will apply your technical skills to uphold our principles of safety, transparency, and oversight.
Responsibilities:
- Develop the foundational systems that power Safeguards, including infrastructure for data storage and management, metric and evaluation systems, and tooling for human and agentic review.
- Ensure the day-to-day operation of Safeguards systems and hold a high operational bar that serves both safety and customers while reducing the human intervention and oversight required.
- Build robust, reliable, multi-layered defenses that improve safety mechanisms in real time and work at scale.
You may be a good fit if you have:
- A Bachelor’s degree in Computer Science or Software Engineering, or comparable experience
- 4–10+ years of experience in a software engineering role
- Proficiency in Python
- Ability to work across the stack
- Strong communication skills and ability to explain complex technical concepts to non-technical stakeholders
Strong candidates may also:
- Have experience building trust and safety, anti-spam, or fraud and abuse detection and mitigation mechanisms and interventions for AI/ML systems
- Have experience building metrics and measurement systems or data and privacy management systems
- Have worked closely with operational teams to build custom internal tooling
- Be proficient in TypeScript or Rust
- Have experience with Claude Code or similar agentic coding tools
Deadline to apply: None. Applications will be reviewed on a rolling basis.