
Senior Systems Software Engineer - Deep Learning Solutions

NVIDIA
Posted on: Feb 28, 2026
Location: Toronto, Ontario, Canada (On-site)
Employment type: Full-time
Salary: C$225k – C$275k Yearly

NVIDIA is a global leader in physical AI, powering self-driving cars, humanoid robots, intelligent environments, and medical devices. Our software platforms are central to this mission, helping innovators build products that save lives, improve working conditions, and raise living standards globally. We are hiring a Senior Engineer to join our team as a technical authority in deep learning inference optimization for autonomous vehicles and robotics on edge hardware. This role calls for a hands-on expert who can inspect model architectures down to the operator level, uncover performance bottlenecks through kernel traces, and evaluate how modern architectures (transformers, vision-language models, diffusion/flow matching, state space models) behave on GPUs and SOCs. The work directly advances how autonomous vehicles and robots sense and respond in the real world, with immediate impact.

This group tackles some of the toughest optimization problems in the industry, working at the intersection of novel model architectures, compiler technology, and embedded hardware. We partner closely with automotive OEMs, robotics collaborators, and internal hardware teams to push the limits of what is achievable on edge devices.

What you'll be doing:

  • Address customer and partner optimization challenges: Engage directly with leading automotive OEMs and robotics partners to analyze, debug, and optimize their deep learning models on NVIDIA platforms. We emphasize delivering solutions rather than just recommendations.

  • Own performance benchmarking: Drive efforts to achieve leading results on MLPerf Edge and industry benchmarks, as well as closed-source engagements with key partners. Define methodology, ensure reproducibility, and turn results into actionable optimization priorities.

  • Evaluate emerging model architectures: Analyze new DL architectures, including vision encoders, multi-modal VLMs, hybrid SSM-Transformer backbones, diffusion/flow matching decoders, and multi-camera tokenizers, for compilation feasibility, memory footprint, and latency on target SOCs.

  • Collaborate across teams: Partner with our compiler, runtime, and hardware teams to connect model-level insight with platform capabilities.

  • Contribute to build reviews and help develop internal roadmap priorities based on real customer workload patterns.

  • Represent NVIDIA externally: Share our deep learning optimization expertise at conferences, webinars, and partner events. Help elevate the broader team by bringing back insights and establishing guidelines.

  • Deliver TensorRT and compiler-stack solutions for edge: Create and deploy inference solutions on Jetson, DRIVE, and GPU + ARM platforms for AV and robotics workloads. Develop Proofs of Readiness (PORs) and work closely with our compiler team on Torch-TRT, MLIR-TRT, and related frameworks to bridge performance gaps.

What we need to see:

  • Master’s degree or equivalent experience in Computer Science, Electrical Engineering, or a related field.

  • 12+ years of industry experience, including over 8 years in deep learning model optimization, inference engineering, or neural network compilation. You should be adept at reading and reasoning about model architectures at the operator/kernel level, not just running them.

  • Over 5 years of proven experience in embedded/edge software, including delivering production inference solutions in power-limited, latency-sensitive deployment environments.

  • Deep knowledge of current DL architectures: transformers, attention variants, vision encoders (ViT), multi-modal/vision-language model frameworks, and experience with diffusion models and/or state space models.

  • Expert knowledge of GPU architecture fundamentals, CUDA, and low-level performance optimization on heterogeneous computing platforms. Experience with TensorRT, compiler IRs, or equivalent inference optimization toolchains.

  • Solid understanding of embedded operating system internals (QNX/Linux), memory management, C/C++, and embedded/system software concepts.

  • Background in parallel programming (e.g., CUDA, OpenMP) and experience reasoning about memory hierarchies, data movement, and compute utilization.

  • Demonstrated capability to collaborate directly with external partners and customers in a deep technical role, solving their workload issues, identifying performance problems, and providing solutions within production limitations.

Ways to Stand Out from the Crowd:

  • Experience with ML compiler frameworks (TVM, MLIR, XLA, Triton) or contributing to inference runtime development.

  • Production deployment experience with autonomous vehicle perception or planning stacks, understanding the full pipeline from sensor input through trajectory output.

  • Familiarity with the Physical AI model landscape: VLM + action expert architectures, end-to-end driving models, or robot foundation models.

  • Contributions to MLPerf benchmarks and large-scale industry performance optimization efforts.

  • Experience with automotive safety standards (ISO 26262, SOTIF) and their implications for inference system development.

  • Experience leading technical initiatives across globally distributed engineering teams.

Your base salary will be determined based on your location, experience, and the pay of employees in similar positions. The base salary range is 225,000 CAD - 275,000 CAD.

You will also be eligible for equity and benefits.

Applications for this job will be accepted at least until March 2, 2026.

This posting is for an existing vacancy. 

NVIDIA uses AI tools in its recruiting processes.

NVIDIA is the world leader in accelerated computing, pioneering the GPU and driving advances in AI, high-performance computing, gaming, autonomous vehicles, and robotics.
