The Applied Foundations team at OpenAI is dedicated to ensuring that our cutting-edge technology is not only revolutionary but also secure against a wide range of adversarial threats. We strive to maintain the integrity of our platforms as they scale. Our team is on the front lines of defending against financial abuse, scaled attacks, and other forms of misuse that could undermine the user experience or harm our operational stability.
The Integrity pillar within Applied Foundations is responsible for the scaled systems that help identify and respond to bad actors and harm on OpenAI’s platforms. As the systems that address some of our most severe usage harms become more mature, we’re adding data scientists to help us robustly measure the prevalence of these problems and the quality of our response to them.
About the Role
We are looking for experienced trust and safety data scientists to help us improve, productionise, and monitor measurement for complex, actor- and sometimes network-level harms. A data scientist in this role will own measurement and metrics across several established harm verticals, including estimating prevalence for on-platform (and sometimes off-platform!) harm and conducting analyses to identify gaps and opportunities in our responses.
This role is based out of our London office and may involve resolving urgent escalations outside of normal work hours. Many harm areas may involve sensitive content, including sexual, violent, or otherwise-disturbing material.
In this role, you will:
own measurement and quantitative analysis for a group of severe, actor- and network-based usage harm verticals
develop and implement AI-first methods for prevalence measurement and other productionised safety metrics, which may necessarily include off-platform indicators or other non-standard datasets (see the sketch after this list)
build metrics that can be used for goaling or A/B tests when prevalence or other top-line metrics are not suitable
own dashboards and metrics reporting for harm verticals
conduct analyses and generate insights that inform improvements to review, detection, or enforcement, and that influence roadmaps
optimise LLM prompts for measurement
collaborate with other safety teams to understand key safety concerns and create relevant policies that support those needs
provide metrics for leadership and external reporting
develop automation to scale yourself, leveraging our agentic products
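As a flavour of the statistical work this role involves, here is a minimal, hypothetical sketch of one common prevalence-measurement pattern: estimating the rate of a harm from a labeled random sample of platform activity and reporting a Wilson score confidence interval. The function name and figures below are illustrative assumptions, not OpenAI tooling or production numbers.

```python
# Illustrative sketch only: estimating the prevalence of a harm from a labeled
# random sample of platform activity, with a Wilson score confidence interval.
# All figures are hypothetical.
import math


def wilson_interval(positives: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Wilson score interval for a binomial proportion (95% CI with z = 1.96)."""
    if n == 0:
        return (0.0, 0.0)
    p_hat = positives / n
    denom = 1 + z**2 / n
    centre = (p_hat + z**2 / (2 * n)) / denom
    margin = (z / denom) * math.sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2))
    return (max(0.0, centre - margin), min(1.0, centre + margin))


# Hypothetical review outcome: 14 violating items in a 5,000-item random sample.
positives, sample_size = 14, 5_000
low, high = wilson_interval(positives, sample_size)
print(f"Estimated prevalence: {positives / sample_size:.2%} "
      f"(95% CI: {low:.2%} to {high:.2%})")
```

The Wilson interval is used here rather than the simpler normal approximation because it behaves better at the very low proportions typical of severe-harm prevalence.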
You might thrive in this role if you:
are a senior data scientist with trust and safety experience who can drive measurement direction
have deep statistics skills, specifically around sampling methods and prevalence estimation of complicated problem areas (ideally activity- rather than content-based)
have experience working with severe and sensitive harm areas like child safety or violence
are an excellent communicator, and have strong cross-functional collaboration skills
are proficient in data programming languages (R or Python, plus SQL)
(ideally) have experience with AI harms or leveraging AI for measurement
About OpenAI
OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.
We are an equal opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or other applicable legally protected characteristic.
For additional information, please see OpenAI’s Affirmative Action and Equal Employment Opportunity Policy Statement.
Background checks for applicants will be administered in accordance with applicable law, and qualified applicants with arrest or conviction records will be considered for employment consistent with those laws, including the San Francisco Fair Chance Ordinance, the Los Angeles County Fair Chance Ordinance for Employers, and the California Fair Chance Act, for US-based candidates. For unincorporated Los Angeles County workers: we reasonably believe that criminal history may have a direct, adverse and negative relationship with the following job duties, potentially resulting in the withdrawal of a conditional offer of employment: protect computer hardware entrusted to you from theft, loss or damage; return all computer hardware in your possession (including the data contained therein) upon termination of employment or end of assignment; and maintain the confidentiality of proprietary, confidential, and non-public information. In addition, job duties require access to secure and protected information technology systems and related data security obligations.
To notify OpenAI that you believe this job posting is non-compliant, please submit a report through this form. No response will be provided to inquiries unrelated to job posting compliance.
We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made via this link.
OpenAI Global Applicant Privacy Policy
At OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges, and we want the upside of AI to be widely shared. Join us in shaping the future of technology.