Computer graphics is moving into a neural-first era. Neural reconstruction techniques like NeRFs and Gaussian Splatting have made it possible to capture real-world scenes with sensor-realistic fidelity; the next step is building systems that can generate, extend, transform, and simulate worlds interactively. NVIDIA is building that future through OmniDreams and AlpaSim. OmniDreams is our real-time diffusion world-model effort. AlpaSim is the simulation system that connects policies, renderers, traffic, physics, and evaluation into a usable closed-loop platform. Together, they point toward a new rendering stack in which reconstructed scenes, generated worlds, and simulation combine to deliver the vision: “dream it, drive it.”
We are looking for an outstanding Research Engineer to help turn brand-new research into working technology. You will work closely with research teams while staying deeply hands-on: prototyping new model and rendering ideas, making sharp design trade-offs, optimizing performance, integrating with downstream stacks, and building systems that can be evaluated, released, and improved over time. The right person sees research and engineering as the same work at different altitudes, and is motivated by the hard implementation details that make a promising idea usable in practice.
What You'll Be Doing:
Partner closely with research teams to shape neural graphics and world-simulation techniques while the field is still taking shape
Build prototypes and production-quality implementations across neural reconstruction, diffusion/world models, rendering, simulation, and real-time inference
Connect generated worlds, reconstructed scenes, controllable simulation state, and downstream evaluation loops into coherent end-to-end systems
Drive performance, latency, memory, temporal consistency, controllability, quality, and developer usability in real-time pipelines
Make principled technical trade-offs across models, renderers, GPU systems, data pipelines, and product constraints
Influence NVIDIA's broader neural graphics direction by grounding frontier ideas in systems that run, scale, and teach us what matters
What We Need to See:
Background in Computer Science, Computer Graphics, Machine Learning, Electrical Engineering, or equivalent practical experience
8+ years of technical experience, or an equivalent record of exceptional technical contribution
Deep hands-on expertise in one or more of: neural rendering, world models, diffusion/video generation, neural reconstruction, computer vision, simulation, or ML systems
Strong implementation skill in real software systems, including debugging, optimization, integration, and performance-aware design
Experience taking research concepts through the lifecycle from prototype to robust, evaluated, production-quality technology
Ability to lead ambiguous, cross-team technical work and communicate the trade-offs behind architectural decisions
Ways to Stand Out From the Crowd:
Hands-on work with Gaussian Splatting, NeRFs, differentiable rendering, diffusion models, video generation, controllable world models, or neural content generation
Experience building real-time graphics, simulation, robotics, AV, physical AI, game engine, or media creation systems
Strong point of view on how AI will change entertainment, graphics, media, and interactive content creation
Experience with CUDA, Slang, graphics APIs, real-time engines, GPU performance, or ML inference optimization
Papers, patents, open-source leadership, shipped systems, or field-level technical impact in graphics, AI, simulation, or ML systems