WHO WE ARE
We build Specific Intelligence for the enterprise: agents that continuously learn from a company's processes, data, expertise, and goals. Our continual learning platform captures context, memory, and decision traces across the enterprise, providing an environment where specialized agents learn how to do real work.
Why we're excited: We get to work at a rare intersection. Our product team builds the platform powering a new generation of digital coworkers. Our research team pushes the frontier of post-training and reinforcement learning to create new product experiences. Our applied research engineers sit side-by-side with customers as they ship agents into production. This combination of strong product, deep research, and boots on the ground is what we believe it takes to bring AI to the enterprise. We are product-led, research-enabled, and forward-deployed.
Our Team: We are a team of engineers, researchers, and operators. Many of us are former founders. We've built RL infrastructure at OpenAI, data foundations at Scale AI, and systems at Together, Two Sigma, Watershed, and elsewhere. We work with F50 customers, and we're fortunate to be backed by Kleiner Perkins, Benchmark, Sequoia, Lux, Greenoaks, and others.
Who Thrives Here: We're looking for people who are excited about applying novel research and complex systems to real-world problems. You should be comfortable navigating unfamiliar environments quickly, whether that's a new codebase, a new customer's data architecture, or a problem domain you've never seen before. Our team genuinely enjoys working with customers: listening, empathizing, and understanding how work actually gets done in their organizations. Former founders, people who've built a lot of side projects, and anyone who's shown they can own something end-to-end tend to do well here.
THE ROLE
As an infrastructure engineer, you'll build the foundational deployments that everything else runs on. You'll own the systems that make Applied Compute's Agent Cloud reliable, secure, and deployable into enterprise environments: sandboxed execution environments, orchestration infrastructure, networking and security middleware, data connectors, and the deployment machinery that provisions our systems into customer VPCs. This is the layer that turns a collection of applications and models into a production-grade enterprise platform.
What You'll Do
Build the agent sandboxing system: secure, isolated, feature-rich execution environments using microVMs and container orchestration
Own the orchestration layer that coordinates agent sessions, LLM pipelines, and background learning jobs
Design and implement the authentication, authorization, and audit logging middleware at the boundary between Applied Compute and customer environments
Build the integration layer that connects agents to tools, external APIs, and MCP servers across applications
Architect the data layer across OLAP, OLTP, and blob data stores, and build mechanisms that keep them coherent
Develop the deployment and provisioning infrastructure (CLI, container registry, resource management) for engineers to ship and manage customer deployments
Ensure the platform meets enterprise security, networking, and compliance requirements across diverse customer cloud environments (AWS, Azure, GCP)
What We're Looking For
Deep systems engineering experience: containers, orchestration, networking, security, and distributed systems
Experience building and operating infrastructure that runs in customer environments (on-prem, VPC, or hybrid cloud)
Strong understanding of security fundamentals: sandboxing, isolation, identity management, secrets handling, audit logging
Comfort with infrastructure-as-code, CI/CD, Kubernetes, and modern deployment tooling
Ability to reason about system reliability, fault tolerance, and operational concerns at scale
Strong Candidates Also Have
Experience with microVM or container isolation technologies (Firecracker, gVisor, Kata Containers)
Background with workflow orchestration systems (Temporal, Cadence, or similar)
Experience building multi-tenant platforms deployed into enterprise environments
Familiarity with WebRTC, browser automation (CDP), or remote desktop streaming
Background with ML infrastructure: GPU scheduling, model serving, training pipelines
Previous experience as a founder or early engineer at a zero-to-one company
BENEFITS + LOGISTICS
This role is based in San Francisco. We work in-person at our office in the Design District.
We offer competitive compensation and equity, generous health benefits, unlimited PTO, paid parental leave, daily lunches and dinners, transportation, retirement plans, and relocation support. We sponsor visas; while we can't guarantee approval for every candidate, if you're the right fit, we're committed to working through the process with you.
We encourage you to apply even if you do not believe you meet every single qualification. As set forth in Applied Compute's Equal Employment Opportunity policy, we do not discriminate on the basis of any protected group status under any applicable law.