Senior Software Engineer, LLM Inference
Excelero Storage
NVIDIA has continuously reinvented itself over two decades. NVIDIA’s invention of the GPU in 1999 sparked the growth of the PC gaming market, redefined modern computer graphics, and revolutionized parallel computing. More recently, GPU deep learning ignited modern AI — the next era of computing — with the GPU acting as the brain of computers, robots, and self-driving cars that can perceive and understand the world.
This is our life's work: to amplify human imagination and intelligence. AI is becoming increasingly important in AI cities and self-driving cars, and NVIDIA is at the forefront of this revolution, providing powerful solutions built on GPU-accelerated libraries such as CUDA, cuDNN, and TensorRT. We are now looking for a GPU computing engineer based in Shanghai.
What you’ll be doing:
Design and develop robust inference software that scales across multiple platforms for functionality and performance
Analyze, optimize, and tune performance
Closely follow academic developments in artificial intelligence and update TensorRT with new features accordingly
Collaborate across the company to guide the direction of machine learning inferencing, working with software, research and product teams
What we need to see:
Master's or higher degree in Computer Engineering, Computer Science, Applied Mathematics, or a related computing-focused field (or equivalent experience)
3+ years of relevant software development experience
Excellent C/C++ programming and software design skills, including debugging, performance analysis, and test design
Strong curiosity about artificial intelligence and awareness of the latest developments in deep learning, such as LLMs and generative models
Experience working with deep learning frameworks like PyTorch
Proactive and able to work without supervision
Excellent written and oral communication skills in English
Strong customer communication skills, with the motivation to provide highly responsive support as needed