In late 2024, we launched Cerebras Inference, the fastest Generative AI inference service in the world, over 10 times faster than GPU-based hyperscale cloud inference. Since launch, we’ve scaled to meet the surging demand from AI labs, enterprises, and a thriving developer community.
In October 2025, we announced our Series G funding, raising $1.1 billion USD to accelerate the expansion of our products and services to meet global AI demand.
About the team
The Cerebras Inference team’s mission is to deliver the world’s most performant, secure, and reliable enterprise-grade AI service. We build and operate large-scale distributed systems that power AI inference at unprecedented speed and efficiency. Join us to help scale inference and accelerate AI.
About the role
We’re looking for a hands-on Reliability Tech Lead (IC) to own the mission of making Cerebras Inference the most reliable AI service in the world. You will drive reliability strategy and execution across our inference stack, from client SDKs and public-cloud multi-region deployments to wafer-scale systems in specialized data centers.
In this role, you will define SLOs and incident-response frameworks, design and implement reliability mechanisms at scale, and partner across hundreds of engineers to ensure our service meets world-class reliability standards.
If you are passionate about building and operating massive-scale, low-latency, high-reliability distributed systems, we want to hear from you.
Responsibilities:
- Define and drive reliability strategy: establish SLOs and ensure alignment across engineering.
- Design and implement reliability mechanisms: build and evolve systems for fault detection, graceful degradation, failover, throttling, and recovery across multiple regions and data centers.
- Lead large-scale incident management: own postmortems, root-cause analysis, and prevention loops for reliability-related incidents.
- Architect for reliability and observability: influence system design for redundancy, durability, and debuggability.
- Develop reliability tooling: create internal tools and frameworks for chaos testing, load simulation, and distributed fault injection.
- Collaborate broadly: work across software, infrastructure, and hardware teams to ensure reliability is embedded into every layer of our inference service.
- Monitor and communicate reliability metrics: build dashboards and alerts that measure service health and provide actionable insights.
- Mentor and influence: guide engineers and set best practices for designing, testing, and operating reliable large-scale systems.
Skills & Qualifications:
- Bachelor's or Master's degree in Computer Science or a related field.
- 7+ years of experience in backend, infrastructure, or reliability engineering for large-scale distributed systems.
- Strong programming skills in at least one widely used backend language such as Python, C++, Go, or Rust.
- Deep, hard-earned experience with reliability principles: SLO/SLI/SLA design, incident response, and postmortem culture.
- Excellent communication and cross-functional leadership skills.
- Bonus: prior experience building large-scale AI infrastructure systems.