
Software Engineer - Model Performance

Baseten
Posted on: Feb 18, 2026
Location: San Francisco, California, United States | New York, New York, United States (On-site)
Employment type: Full-time
Salary: $150k – $250k Yearly

ABOUT BASETEN

Baseten powers mission-critical inference for the world's most dynamic AI companies, like Cursor, Notion, OpenEvidence, Abridge, Clay, Gamma and Writer. By uniting applied AI research, flexible infrastructure, and seamless developer tooling, we enable companies operating at the frontier of AI to bring cutting-edge models into production. We're growing quickly and recently raised our $150M Series D, backed by investors including BOND, IVP, Spark Capital, Greylock, and Conviction. Join us and help build the platform engineers turn to for shipping AI products.

THE ROLE

Are you passionate about advancing the application of artificial intelligence? We are looking for a Software Engineer focused on ML performance to join our dynamic team. This role is ideal for someone who thrives in a fast-paced startup environment and is eager to make significant contributions to the exciting field of LLM inference. If you are a backend engineer who loves making things faster and is excited about open-source ML models, we look forward to your application.

EXAMPLE INITIATIVES

You'll get to work on these types of projects as part of our Model Performance team:


RESPONSIBILITIES

  • Implement, refine, and productionize cutting-edge techniques (quantization, speculative decoding, KV cache reuse, chunked prefill, and LoRA) for ML model inference and infrastructure.

  • Deep dive into the underlying codebases of TensorRT, PyTorch, TensorRT-LLM, vLLM, SGLang, CUDA, and other libraries to debug ML performance issues.

  • Apply and scale optimization techniques across a wide range of ML models, particularly large language models.

  • Collaborate with a diverse team to design and implement innovative solutions.

  • Own projects from idea to production.

REQUIREMENTS

  • Bachelor's, Master's, or Ph.D. degree in Computer Science, Engineering, Mathematics, or a related field.

  • Experience with one or more general-purpose programming languages, such as Python or C++.

  • Familiarity with LLM optimization techniques (e.g., quantization, speculative decoding, continuous batching).

  • Strong familiarity with ML libraries, especially PyTorch, TensorRT, or TensorRT-LLM.

  • Demonstrated interest and experience in LLMs.

  • Deep understanding of GPU architecture.

  • Bonus:

    • Proficiency in enhancing the performance of software systems, particularly large language models (LLMs).

    • Experience with CUDA or similar technologies.

    • Deep understanding of software engineering principles and a proven track record of developing and deploying AI/ML inference solutions.

    • Experience with Docker and Kubernetes.

BENEFITS

  • Competitive compensation, including meaningful equity.

  • 100% coverage of medical, dental, and vision insurance for employees and dependents.

  • Generous PTO policy, including a company-wide Winter Break (our offices are closed from Christmas Eve to New Year's Day!).

  • Paid parental leave.

  • Company-facilitated 401(k).

  • Exposure to a variety of ML startups, offering unparalleled learning and networking opportunities.

Apply now to embark on a rewarding journey in shaping the future of AI! If you are a motivated individual with a passion for machine learning and a desire to be part of a collaborative and forward-thinking team, we would love to hear from you.

At Baseten, we are committed to fostering a diverse and inclusive workplace. We provide equal employment opportunities to all employees and applicants without regard to race, color, religion, gender, sexual orientation, gender identity or expression, national origin, age, genetic information, disability, or veteran status.

