Deep Learning Library Performance Software Engineer Intern

Location: Santa Clara, California, United States
Type: Full-time
Posted: 09.AUG.2021




Do you enjoy tuning parallel algorithms and analyzing their performance? If so, we want to hear from you! As a deep learning library performance software engineer, you will be developing optimized code to accelerate linear algebra and deep learning operations on NVIDIA GPUs. The team delivers high-performance code to NVIDIA's cuDNN, cuBLAS, and TensorRT libraries to accelerate deep learning models. The team is proud to play an integral part in enabling breakthroughs in domains such as image classification, speech recognition, and natural language processing. Join the team that is building the underlying software used across the world to power the revolution in artificial intelligence!

We're always striving for peak GPU efficiency on current and future-generation GPUs. To get a sense of the code we write, check out our CUTLASS open-source project, which showcases performant matrix multiply on NVIDIA's Tensor Cores with CUDA.

What you'll be doing:
- Writing highly tuned compute kernels, mostly in C++ CUDA, to perform core deep learning operations (e.g. matrix multiplies, convolutions, normalizations)
- Following general software engineering best practices, including support for regression testing and CI/CD flows
- Collaborating with teams across NVIDIA: 1) the CUDA compiler team on generating optimal assembly code, 2) the deep learning training and inference performance teams on which layers require optimization, and 3) the hardware and architecture teams on the programming model for new deep learning hardware features

What we need to see:
- Pursuing a BS, MS, or PhD degree in Computer Science, Computer Engineering, Applied Math, or a related field
- Demonstrated strong C++ programming and software design skills, including debugging, performance analysis, and test design
- Experience with performance-oriented parallel programming, even if it's not on GPUs (e.g.
with OpenMP or pthreads)
- Solid understanding of computer architecture and some experience with assembly programming

Ways to stand out from the crowd:
- Tuning BLAS or deep learning library kernel code
- CUDA/OpenCL GPU programming
- Numerical methods and linear algebra
- LLVM, TVM tensor expressions, or TensorFlow MLIR

While deep learning experience at the framework or model level is certainly helpful and valued for the context it gives on how our math libraries and kernels are used, much of what we do does not require it. This position primarily deals with code lower in the deep learning software stack, right down to the GPU hardware.

NVIDIA is widely considered to be one of the technology world's most desirable employers. We have some of the most brilliant and talented people in the world working for us. If you're creative, autonomous, and love a challenge, we want to hear from you. Join our deep learning library team and help build the real-time, cost-effective computing platform driving our success in this exciting and quickly growing field.

NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status, or any other characteristic protected by law.

#deeplearning
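For a flavor of the kernel-tuning work described above, here is a toy sketch (our illustration, not NVIDIA library code): a cache-blocked ("tiled") matrix multiply in plain C++. The same idea of reordering loops so each tile of data is reused while it is still in fast memory is what GPU libraries such as CUTLASS apply to shared memory and registers on Tensor Cores; the tile size of 32 here is an arbitrary example value, tuned per cache or shared-memory size in real code.

```cpp
#include <cassert>
#include <vector>

constexpr int TILE = 32;  // tile edge; an example value, tuned per target in real code

// C (n x n) += A (n x n) * B (n x n), row-major; n is assumed a multiple of TILE.
void matmul_tiled(const std::vector<float>& A, const std::vector<float>& B,
                  std::vector<float>& C, int n) {
    for (int i0 = 0; i0 < n; i0 += TILE)
        for (int k0 = 0; k0 < n; k0 += TILE)
            for (int j0 = 0; j0 < n; j0 += TILE)
                // Multiply one TILE x TILE block of A by one block of B,
                // so both blocks stay resident in cache while they are reused.
                for (int i = i0; i < i0 + TILE; ++i)
                    for (int k = k0; k < k0 + TILE; ++k) {
                        float a = A[i * n + k];
                        for (int j = j0; j < j0 + TILE; ++j)
                            C[i * n + j] += a * B[k * n + j];
                    }
}
```

Real library kernels layer many more techniques on top (vectorization, software pipelining, register blocking, and, on GPUs, Tensor Core instructions), but loop tiling for data reuse is the common starting point.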

Apply Now

