
Research Engineer

Applied Compute
Posted on: Feb 23, 2026
Location: San Francisco, California, United States (On-site)
Employment type: Full-time

Who We Are


We build Specific Intelligence for the enterprise: agents that continuously learn from a company's processes, data, expertise, and goals. Today, there's a massive gap between what AI models can do in isolation and what they reliably do inside real businesses; these systems fail because they don't adapt to feedback. We're building the continual learning layer: a platform that captures context, memory, and decision traces across the enterprise, providing an environment where specialized agents learn how to do real work.

Why we're excited: We get to work at a rare intersection. Our product team builds the platform powering a new generation of digital coworkers. Our research team pushes the frontier of post-training and reinforcement learning to create new product experiences. Our applied research engineers sit side-by-side with customers as they ship models into production. This combination of strong product, deep research, and boots on the ground is what we believe it takes to bring AI to the enterprise. We are product-led, research-enabled, and forward-deployed.

Our Team: We are a team of engineers, researchers, and operators. Many of us are former founders. We've built RL infrastructure at OpenAI, data foundations at Scale AI, and systems at Together, Two Sigma, and Watershed. We work with Fortune 50 customers as well as DoorDash, Mercor, and Cognition. We're fortunate to be backed by Benchmark, Sequoia, Lux, and others.

Who Thrives Here: We're looking for people who are excited about applying novel research and complex systems to real-world problems. You should be comfortable navigating unfamiliar environments quickly, whether that's a new codebase, a new customer's data architecture, or a problem domain you've never seen before. You should also genuinely enjoy working with customers: listening, empathizing, and understanding how work actually gets done in their organizations. Former founders, people who've built a lot of side projects, and anyone who's shown they can own something end-to-end tend to do well here.

The Role

As a research engineer, you'll train frontier-scale models and develop the methods that make continual learning work inside enterprise environments. You'll design and run experiments at scale, explore cutting-edge RL techniques, and build the tools that let us understand what's actually happening during training. This role sits at the intersection of research and systems. You'll invent new algorithms alongside researchers, then work with infrastructure engineers to run them on GPUs.

What You'll Do

  • Post-train frontier-scale language models on enterprise tasks and environments

  • Explore and develop RL techniques, co-designing algorithms and systems

  • Contribute to Alchemy, our data research program for generating signal-rich training environments from production data

  • Build high-performance internal tools for probing, debugging, and analyzing training runs

  • Partner with infrastructure engineers to scale training and inference efficiently

What We're Looking For

  • Experience training or serving large language models

  • Experience building RL environments and evaluations for language models

  • Proficiency in PyTorch, JAX, or similar ML frameworks, with experience in distributed training

  • Strong experimental design skills: you know how to set up experiments that actually answer questions

Strong Candidates Also Have

  • Background in pre-training or post-training research

  • Previous experience in high-performance computing environments or large-scale clusters

  • Contributions to open-source ML research or infrastructure

  • Demonstrated technical creativity through published research, OSS contributions, or side projects

Logistics

This role is based in San Francisco. We work in-person at our office in the Design District. We offer competitive compensation and equity, generous health benefits, unlimited PTO, paid parental leave, daily lunches and dinners, transportation, and relocation support. We sponsor visas. While we can't guarantee success for every candidate or role, if you're the right fit, we're committed to working through the process with you.

We encourage you to apply even if you do not believe you meet every single qualification. As set forth in Applied Compute’s Equal Employment Opportunity policy, we do not discriminate on the basis of any protected group status under any applicable law.

Applied Compute


Applied Compute builds Specific Intelligence for enterprises, training custom AI models and deploying in-house agent workforces using proprietary company data. Founded by former OpenAI researchers, the company is backed by $80M from Benchmark, Sequoia, and Lux Capital.
