NVIDIA is the platform upon which every new AI-powered application is built. We are seeking a Sr. Software Engineer – Inference Platform Infrastructure to help build and automate the foundations that keep NVIDIA's inference services running smoothly, so that they are reliable, scalable, and easy to operate across thousands of GPUs. This is a hands-on role for an engineer who loves taming complex distributed systems with code and turning them into simple, automated workflows: scheduling and placement logic, GPU health tracking, safe rollouts, fast recovery, and capacity efficiency, all delivered through software with the goal of minimizing manual operations.
What you'll be doing:
Build automation that makes inference at scale easy to operate: provisioning, configuration, upgrades, rollbacks, and routine maintenance—optimized for repeatability and safety.
Create and evolve deployment patterns for inference workloads on Kubernetes: rollouts, autoscaling, multi‑cluster patterns, GPU scheduling/isolation, and safe upgrade strategies.
Own platform reliability outcomes through software: define and improve SLIs/SLOs, error budgets, alert quality, and automated remediation for common failure modes.
Own and operate a large fleet of NVIDIA GPU and datacenter hardware from pre-release to production.
What we need to see:
Strong software engineering skills and the ability to build platforms and systems that our teams rely on.
5+ years building and operating production distributed systems with strong ownership and a track record of improving reliability and eliminating toil.
Proven expertise in cloud-native platforms: Kubernetes, containers, service networking, configuration management, and modern CI/CD.
Deep experience with infrastructure‑as‑code and automation-first operations (e.g., GitOps workflows, policy enforcement, fleet management patterns).
Excellent communication and collaboration skills; ability to lead cross‑functional efforts and drive improvements to completion.
BS/MS in Computer Science, Computer Engineering, or a related field (or equivalent experience).
Ways to stand out from the crowd:
Direct experience operating inference serving at scale (Triton, TensorRT‑LLM, KServe/Ray Serve, etc.).
Built scheduling, placement, or quota systems (priority queues, fairness, admission control, rate limiting) for Kubernetes.
Built fleet health systems: telemetry pipelines, automated quarantine/drain, and hardware/software failure triage automation.
We are widely considered to be one of the technology world’s most desirable employers. We have some of the most forward‑thinking and creative people in the world working for us. If you're creative and autonomous with a real passion for technology, we want to hear from you.
Your base salary will be determined based on your location, experience, and the pay of employees in similar positions. The base salary range is 152,000 USD - 241,500 USD for Level 3, and 184,000 USD - 287,500 USD for Level 4.
You will also be eligible for equity and benefits.
Applications for this job will be accepted at least until February 21, 2026.
This posting is for an existing vacancy.
NVIDIA uses AI tools in its recruiting processes.
NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.