NVIDIA is a pioneer in accelerated computing, known for inventing the GPU and driving breakthroughs in gaming, computer graphics, high-performance computing, and artificial intelligence. Our technology powers everything from generative AI to autonomous systems, and we continue to shape the future of computing through innovation and collaboration. Within this mission, our team, Managed AI Research Superclusters (MARS), builds and scales the infrastructure, platforms, and tools that enable researchers and engineers to develop the next generation of AI/ML systems. By joining us, you’ll help design solutions that power some of the world’s most advanced computing workloads.
As a member of the Scheduling team, you will participate in the design and implementation of groundbreaking GPU compute clusters that run demanding deep learning, high-performance computing, and computationally intensive workloads. We seek engineers with deep technical expertise to identify architectural directions and new approaches for AI workload scheduling that serve many simultaneous, large multi-node GPU workloads with complex requirements and dependencies. This role offers an excellent opportunity to deliver production-grade solutions, get hands-on with cutting-edge technology, and work closely with technical leaders solving some of the biggest challenges in machine learning, cloud computing, and system co-design.
What you'll be doing:
Design and develop new scheduling features and add-on services to improve GPU compute clusters across many dimensions, such as resource-usage fairness, GPU occupancy, GPU waste, application resilience, application performance, and power usage
Design and develop batch workload management and orchestration services
Support staff and end users in resolving batch-scheduler issues
Build and improve our ecosystem around GPU-accelerated computing
Analyze and optimize the performance of deep learning workflows
Develop large-scale automation solutions
Perform root-cause analysis and suggest corrective actions for problems at both large and small scales
Find and fix problems before they occur
What we need to see:
Bachelor’s degree in Computer Science, Electrical Engineering, or a related field, or equivalent experience
5+ years of work experience
Strong understanding of batch scheduling, preferably with experience in schedulers such as SLURM or K8s batch schedulers (Kueue, Volcano, etc.)
Significant experience with systems programming languages such as C/C++ and Go, as well as scripting languages such as Python and Bash
Established experience with the Linux operating system, environment, and tools
Experience analyzing and tuning performance for a variety of AI workloads
In-depth understanding of container technologies such as Docker, Singularity, and Podman
Flexibility and adaptability to work in a dynamic environment with varied frameworks and requirements
Excellent communication, interpersonal and customer collaboration skills
Ways to stand out from the crowd:
Knowledge of high-performance computing
Contributions to open-source software
Experience with deep learning frameworks such as PyTorch and TensorFlow
Passion for software development processes
You will also be eligible for equity and benefits.
This posting is for an existing vacancy.
NVIDIA uses AI tools in its recruiting processes.
NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.