NVIDIA has continuously reinvented itself over two decades. NVIDIA’s invention of the GPU in 1999 sparked the growth of the PC gaming market, redefined modern computer graphics, and revolutionized parallel computing. More recently, GPU deep learning ignited modern AI — the next era of computing — with the GPU acting as the brain of computers, robots, and self-driving cars that can perceive and understand the world. This is our life’s work — to amplify human imagination and intelligence.
AI is becoming increasingly important in self-driving cars. NVIDIA is at the forefront of the AI-city and autonomous-driving revolution, providing powerful solutions built on GPU-accelerated libraries such as CUDA and TensorRT. We are now looking for a GPU computing engineer based in Shanghai.
What you’ll be doing:
Analyze deep learning models and investigate TensorRT stability and performance issues reported by customers or internal teams.
Work with an internationally distributed team, with remote colleagues in the US, APAC, and India, on CUDA and TensorRT development.
Extract feature requirements and FAQs from analysis and development work, and produce the corresponding documentation.
What we need to see:
Bachelor's degree (or equivalent experience) in Computer Science or Electrical Engineering is required; a Master's degree is preferred.
3-5+ years of related work experience.
Strong programming skills in C, C++, and Python.
Knowledge of popular inference networks and layers.
Experience working with deep learning frameworks such as Torch and PyTorch.
Strong written and verbal communication skills in both English and Mandarin.
Ability to work well in a diverse team environment as well as with cross-site peers.
Strong customer communication skills and strong motivation to provide highly responsive support as needed.
Ways to stand out from the crowd:
Strong proficiency with PyTorch.
#deeplearning