
Staff Software Engineer, Inference Cloud

Sunnyvale, California, United States · Full-time

Cerebras Systems builds the world's largest AI chip, 56 times larger than GPUs. Our novel wafer-scale architecture provides the AI compute power of dozens of GPUs on a single chip, with the programming simplicity of a single device. This approach allows Cerebras to deliver industry-leading training and inference speeds and empowers machine learning users to effortlessly run large-scale ML applications, without the hassle of managing hundreds of GPUs or TPUs.  

Cerebras' current customers include top model labs, global enterprises, and cutting-edge AI-native startups. OpenAI recently announced a multi-year partnership with Cerebras to deploy 750 megawatts of compute capacity, transforming key workloads with ultra-high-speed inference.

Thanks to its groundbreaking wafer-scale architecture, Cerebras Inference offers the fastest generative AI inference solution in the world, more than 10 times faster than GPU-based hyperscale cloud inference services. This order-of-magnitude increase in speed is transforming the user experience of AI applications, unlocking real-time iteration and increasing intelligence via additional agentic computation.

Location: Sunnyvale 

We're hiring a Staff Engineer to own major areas of the architecture of our Inference Cloud Platform. This team owns the cloud layer behind our Inference Service and is responsible for its availability, latency, reliability, and global scale.

This is a hands-on IC role for an engineer who wants to work on the hardest distributed systems problems in the stack: multi-region traffic architecture, graceful degradation under bursty AI workloads, performance at high QPS, and the operating model for a platform that has to stay fast and available under load. You'll write code, lead key architectural decisions in your domain, debug production issues, and help shape technical direction across adjacent teams.

If you're interested in building the next-generation architecture of a globally distributed inference platform, we'd like to talk. 

Responsibilities 

  • Platform Direction. Help shape the technical direction for the Inference Cloud Platform, including multi-region topology, failure domains, service boundaries, and system evolution over time, and own the roadmap for major technical areas. 
  • Core Cloud Systems. Design and build critical platform components such as service discovery, request routing, load balancing, caching, batching, and traffic management for AI inference workloads. 
  • Reliability & Performance. Architect active-active systems with rapid failover, graceful degradation, and clear SLOs. Drive system-level improvements in latency, throughput, capacity efficiency, and resilience under unpredictable demand. 
  • Traffic Control & Service Tiers. Define platform mechanisms for admission control, quota management, rate limiting, and differentiated quality of service across workload types and customer tiers. 
  • Execution on Critical Paths. Write and review production code in the most important parts of the platform. Make high-consequence architectural decisions within your area and set the technical bar through design reviews, code reviews, and sound engineering judgment. 
  • Production Leadership. Lead on the hardest production issues and cross-system bottlenecks. Drive observability, incident response, capacity planning, and post-incident improvement with a high standard for operational rigor.  
  • Technical Influence. Partner with ML, Product, Infrastructure, and Platform teams to translate product and business requirements into scalable system designs, and drive alignment on shared technical decisions within your domain and adjacent platform surfaces. 
  • Mentorship. Raise the effectiveness of senior engineers through design feedback, pairing, and clear technical standards. 

Skills & Qualifications 

  • 8+ years of experience in software engineering, with substantial individual contributor experience building and operating large-scale distributed systems or cloud infrastructure. 
  • Deep expertise in distributed systems architecture in cloud environments, including networking, compute orchestration, container platforms, and multi-region production services. 
  • Strong track record of making sound architectural decisions for highly available, latency-sensitive systems at scale. 
  • Experience optimizing latency, throughput, and efficiency in high-QPS systems. Experience with time-to-first-token (TTFT) and tail-latency reduction is a strong plus. 
  • Strong proficiency in backend or systems languages such as Go, C++, or Python, with the expectation that you can contribute production code directly. 
  • Experience designing observability and reliability practices, including metrics, logging, tracing, alerting, incident response, and SLO-driven operations. 
  • Ability to influence senior engineers and cross-functional partners through technical credibility, communication, and judgment, especially within your domain and adjacent systems. 
  • Experience with ML inference infrastructure, model serving systems, or GPU-accelerated workloads is a plus. 

Why Join Cerebras

People who are serious about software make their own hardware. At Cerebras we have built a breakthrough architecture that is unlocking new opportunities for the AI industry. With dozens of model releases and rapid growth, we've reached an inflection point in our business. Members of our team tell us there are five main reasons they joined Cerebras:

  1. Build a breakthrough AI platform beyond the constraints of the GPU.
  2. Publish and open source their cutting-edge AI research.
  3. Work on one of the fastest AI supercomputers in the world.
  4. Enjoy job stability with startup vitality.
  5. Enjoy a simple, non-corporate work culture that respects individual beliefs.

Read our blog: Five Reasons to Join Cerebras in 2026.

Apply today and be part of groundbreaking advancements in AI!

Cerebras Systems is committed to creating an equal and diverse environment and is proud to be an equal opportunity employer. We celebrate different backgrounds, perspectives, and skills. We believe inclusive teams build better products and companies. We try every day to build a work environment that empowers people to do their best work through continuous learning, growth and support of those around them.
