We are building the next generation of GPU‑accelerated recommendation tools, redefining how models are trained and deployed at scale. Our mission is to make developing and productizing GPU‑based recommender systems as seamless, efficient, and powerful as possible. As part of this effort, you will join a world‑class team of ML, HPC, and Software Engineers focused on maximizing training and inference speed while enabling effortless scalability.
What You’ll Be Doing:
Profile, analyze, and optimize GPU‑accelerated code to improve training and inference performance for large‑scale recommender systems.
Design, implement, and maintain high‑performance C++/CUDA components within our core recommendation framework.
Develop and execute tests (unit, integration, and performance) to ensure numerical correctness, stability, and regression prevention in GPU workloads.
Collaborate closely with CUDA and ML engineers to interpret profiling results, refine designs, and implement optimization strategies.
Design and optimize high‑throughput data flows between GPUs, RDMA‑capable NICs, and NVMe SSDs using technologies such as GPUDirect RDMA and GPUDirect Storage.
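The numerical-correctness testing mentioned above typically means comparing an optimized implementation against a trusted reference within a floating-point tolerance. The sketch below illustrates the pattern using NumPy on CPU as a stand-in for a GPU kernel; the function names and tolerances are illustrative assumptions, not from any real codebase.

```python
import numpy as np

def reference_softmax(x):
    # Numerically stable float64 reference implementation.
    shifted = x - np.max(x, axis=-1, keepdims=True)
    e = np.exp(shifted)
    return e / np.sum(e, axis=-1, keepdims=True)

def optimized_softmax(x):
    # Placeholder for an optimized (e.g., fused GPU) implementation;
    # computed in float32 here to mimic reduced-precision behavior.
    x32 = x.astype(np.float32)
    shifted = x32 - np.max(x32, axis=-1, keepdims=True)
    e = np.exp(shifted)
    return e / np.sum(e, axis=-1, keepdims=True)

def check_numerical_parity(batch=64, dim=128, rtol=1e-4, atol=1e-6, seed=0):
    # Regression-style check: fail loudly if the optimized path drifts
    # from the reference beyond the stated tolerance.
    rng = np.random.default_rng(seed)
    x = rng.standard_normal((batch, dim))
    ref = reference_softmax(x)
    opt = optimized_softmax(x).astype(np.float64)
    np.testing.assert_allclose(opt, ref, rtol=rtol, atol=atol)
    return True
```

In practice the same comparison would run against kernel output copied back from the device, with tolerances chosen per dtype.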
What We Need to See:
Bachelor’s or Master’s degree in Computer Science, Software Engineering, Mathematics, or a related technical field.
3+ years of experience in C++, CUDA, and Python development on Linux systems.
Solid understanding of numerical computing, floating‑point behavior, and GPU performance profiling.
Proven ability to diagnose and optimize computational pipelines using profiling tools such as Nsight Systems or nvprof.
Excellent communication skills and the ability to work effectively across cross‑functional engineering teams.
Ways to Stand Out from the Crowd:
Relevant experience building or optimizing large‑scale recommender systems or production ML workloads on GPUs.
Familiarity with deep learning frameworks and their GPU backends (e.g., PyTorch, TensorFlow, JAX).
Hands‑on knowledge of distributed or multi‑GPU training setups, including NCCL or MPI‑based communication.
Experience with RDMA (verbs, UCX, or CUDA‑aware MPI) and high‑speed data movement between compute and storage.
Knowledge of high‑performance storage pipelines using NVMe SSDs, GPUDirect Storage, or NVMe‑oF.
NVIDIA offers highly competitive salaries, comprehensive benefits, and the opportunity to work with some of the industry’s most forward‑thinking engineers. You’ll tackle real‑world challenges at massive scale in fields like Deep Learning, AI, Autonomous Systems, and Supercomputing. If you’re a creative, autonomous computer scientist with a passion for GPU performance and high‑performance systems design, we would love to hear from you.
#deeplearning