NVIDIA is the platform for every new AI-powered application. We are seeking a senior engineer to own and evolve the core NIM Platform SDK and microservice framework that powers NVIDIA Inference Microservices (NIM). The ideal candidate has deep systems engineering skills and a passion for building the foundational platform libraries that support every NIM modality in delivering production-ready AI inference at scale.
This is a hands-on, deeply technical role for someone who thrives on building core platforms that scale. You will solve deep software engineering challenges spanning high-performance systems programming, multi-cloud abstractions, and API framework development, collaborating across NIM product teams to deliver production-grade software that supports NVIDIA and the wider AI ecosystem.
What you'll be doing:
Develop and advance the inference microservice framework: OpenAI-compatible API endpoints, inference backend integrations (vLLM, SGLang, TensorRT-LLM, Dynamo), middleware, observability instrumentation, and production hardening across cloud, on-prem, and Kubernetes environments.
Architect significant new features in open-source codebases, shepherding them through project acceptance and into production.
Build and optimize high-performance model download and caching pipelines across multiple cloud storage backends (NGC, HuggingFace, S3, GCS) - parallel transfers, integrity verification, and seamless multi-cloud operability.
Implement the model profile and manifest system that ensures NIMs are optimized for every NVIDIA GPU platform - profile selection, validation, and multi-GPU configuration.
Develop and refine cloud microservice patterns - service discovery, health checking, graceful degradation, API gateway integration, and end-to-end request lifecycle management - to ensure NIMs operate reliably at scale in diverse cloud deployment environments.
Be a role model for high-quality code across Python, Rust, and C/C++, and model best practices in test-driven development, agentic AI-assisted development, code review, and cross-team collaboration.
Mentor teammates and establish high engineering standards for container quality, security, and operability.
What we need to see:
BS or MS in Computer Science, Computer Engineering, or related field (or equivalent experience).
8+ years of demonstrated experience in performant microservice, cloud software, and/or platform infrastructure roles.
Deep technical expertise in cloud-native microservice architecture, including service mesh, API gateways, load balancing, and distributed system design patterns.
Expertise in high-performance data pipelines with parallel I/O, caching strategies, and integrity verification across distributed storage systems.
Solid understanding of containerized application delivery using technologies such as Docker, Kubernetes, and Helm.
Understanding of application security principles, including secure coding practices, vulnerability mitigation, secrets management, and supply chain integrity for containerized environments.
Strong problem-solving skills grounded in first-principles reasoning and critical analysis.
Excellent programming skills in Python and Rust, with strong foundations in algorithms, development patterns, and software engineering principles.
Ways to stand out from the crowd:
Direct involvement in open-source inference backends such as vLLM, TensorRT-LLM, or SGLang.
Direct involvement in disaggregated serving frameworks like NVIDIA Dynamo.
Experience building and operating production microservices at scale.
Deep knowledge of multi-cloud deployment strategies across AWS, GCP, Azure, and OCI.
Experience operating in regulated, air-gapped, or disconnected environments where strict security and compliance controls are required.
Your base salary will be determined based on your location, experience, and the pay of employees in similar positions. The base salary range is 184,000 USD - 287,500 USD for Level 4, and 224,000 USD - 356,500 USD for Level 5.
You will also be eligible for equity and benefits.
Applications for this job will be accepted at least until April 4, 2026.
This posting is for an existing vacancy.
NVIDIA uses AI tools in its recruiting processes.
NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.