We are seeking a Deep Learning Research Engineer to join our team and help develop the next generation of Large Language Model (LLM) inference algorithms. You will work on technologies that directly enhance NVIDIA's software, making the latest LLMs more efficient and accessible to users worldwide. This role is designed for someone with strong research foundations who also wants to build software that runs and scales into production systems across the world.
By joining us, you will be part of a strategic effort to establish NVIDIA as the definitive platform for high-performance LLM inference. The work requires a combination of research taste, experimental rigor, and engineering ownership: you will explore new ideas, run rigorous evaluations, and help transform successful approaches into tools and implementations.
What you'll be doing:
Develop and improve benchmarks, profiling workflows, and evaluation pipelines that make inference performance measurable and reproducible.
Design and lead the development of experimental frameworks that enable rapid, reproducible evaluation of algorithmic tradeoffs across quality, latency, throughput, and other key metrics.
Prototype new algorithms for LLM inference that advance the state of the art in both low-latency and high-throughput scenarios, and translate them into practical software solutions that directly impact NVIDIA's products and customers.
Trace and profile the performance of new algorithms on NVIDIA's latest hardware, identifying bottlenecks and opportunities for algorithmic optimizations.
Collaborate with internal research, engineering, and product teams across the globe to drive the development of advanced inference technologies.
Stay current with research in LLM inference, efficient generation, model architectures, and inference engines, and translate relevant advances into practical solutions.
What we need to see:
MSc in Computer Science, Electrical Engineering, or a closely related field; or equivalent experience in an industrial research role.
At least 5 years of experience in applied research, research engineering, or algorithm engineering.
Excellent software engineering skills, particularly in Python and deep learning frameworks like PyTorch.
Proven experience with High-Performance Computing (HPC) environments, including training or running inference on large-scale GPU clusters (tens to hundreds of GPUs).
Interest in the systems side of deep learning, including inference engines, benchmarking, profiling, GPU efficiency, memory behavior, and deployment constraints.
A strong problem-solving mentality and a proactive attitude, driven by the ambition to deliver solutions with real-world impact.
Ways to stand out from the crowd:
At least one publication in a top-tier AI/ML conference (e.g., NeurIPS, ICLR, ICML).
Deep understanding of LLM architectures coupled with hands-on experience in training large-scale models.
Hands-on research experience in LLM inference optimization algorithms such as speculative decoding or parallelization strategies.
Deep familiarity and experience with popular LLM inference frameworks (e.g., vLLM, TensorRT-LLM).
We are an equal-opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, sex, gender, gender expression, sexual orientation, age, marital status, veteran status, or disability status. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.