
Member of Engineering (Inference)

Poolside
Posted on: Feb 18, 2026
Location: United Kingdom or Remote (Europe + 1 more)
Employment type: Full-time

ABOUT POOLSIDE

In this decade, the world will create Artificial General Intelligence. Only a small number of companies will achieve this. Their ability to stack advantages and pull ahead will define the winners. These companies will move faster than anyone else. They will attract the world's most capable talent. They will be at the forefront of applied research, engineering, infrastructure, and deployment at scale. They will continue to scale their training toward larger and more capable models. They will earn the right to raise large amounts of capital along the way to enable this. They will create powerful economic engines. They will obsess over the success of their users and customers.


poolside exists to be this company - to build a world where AI will be the engine behind economically valuable work and scientific progress.


ABOUT OUR TEAM

We are a remote-first team spread across Europe and North America. We come together in person for three days once a month, and for longer offsites twice a year.

Our R&D and production teams combine more research-oriented and more engineering-oriented profiles; however, everyone deeply cares about the quality of the systems we build and has a strong underlying knowledge of software development. We believe that good engineering leads to faster development iterations, which lets us compound our efforts.

ABOUT THE ROLE

You will focus on building out our multi-device inference of Large Language Models, covering both standard transformers and custom linear attention architectures. You will work with lower-precision inference and tensor parallelism, and you should be comfortable diving into vLLM, Torch, and AWS libraries. You will drive improvements on both NVIDIA and AWS hardware, working at the bleeding edge of what's possible and finding yourself hacking on and testing the latest vendor solutions. We are rewrite-in-Rust-friendly.
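The role mentions lower-precision inference; this technique can be illustrated with a minimal NumPy sketch of per-tensor symmetric int8 weight quantization. This is not code from the posting, just a hedged illustration of the concept: weights are mapped to 8-bit integers plus a single float scale, halving or quartering memory traffic at a small accuracy cost.

```python
import numpy as np

def quantize_int8(w: np.ndarray):
    """Per-tensor symmetric int8 quantization: w is approximated by scale * q."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximate float32 tensor from the int8 codes."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal((256, 256)).astype(np.float32)  # toy weight matrix
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)

# The reconstruction error is small relative to the weights themselves.
rel_err = np.linalg.norm(w - w_hat) / np.linalg.norm(w)
print(f"relative reconstruction error: {rel_err:.4f}")
```

Production inference stacks (vLLM among them) apply far more sophisticated schemes (per-channel scales, activation quantization, fp8 formats), but the scale-and-round core is the same idea.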

YOUR MISSION

To develop and continuously improve the inference of LLMs for source code generation, optimizing for the lowest latency, the highest throughput, and the best hardware utilization.

RESPONSIBILITIES

  • Follow the latest research on LLMs, inference and source code generation

  • Propose and evaluate innovations, both in the quality and the efficiency of the inference

  • Monitor and implement LLM inference metrics in production

  • Write high-quality, high-performance Python, Cython, C/C++, Triton, ThunderKittens, native CUDA, and Amazon Neuron code

  • Work as part of the team: plan future steps, discuss openly, and always stay in touch

SKILLS & EXPERIENCE

  • Experience with Large Language Models (LLMs)

    • Confident knowledge of the computational properties of transformers

    • Knowledge/Experience with cutting-edge inference tricks

    • Knowledge/Experience with distributed and lower-precision inference

    • Knowledge of deep learning fundamentals

  • Strong engineering background

    • Theoretical computer science knowledge is a must

    • Experience with programming for hardware accelerators

    • SIMD algorithms

    • Expert understanding of matrix multiplication bottlenecks

    • Know hardware operation latencies by heart

  • Research experience

    • Nice to have, but not required: authorship of scientific papers on topics such as applied deep learning, LLMs, or source code generation

    • Can freely discuss the latest papers and drill down into the fine details

    • You have strong opinions, weakly held

  • Programming experience

    • Linux

    • Git

    • Python with PyTorch or Jax

    • C/C++, CUDA, Triton, ThunderKittens

    • Use modern tools and are always looking to improve

    • Opinionated but reasonable, practical, and not afraid to ignore best practices

    • Strong critical thinking and ability to question code quality policies when applicable

    • Prior experience in non-ML programming is a nice-to-have
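The "matrix multiplication bottlenecks" item above can be made concrete with a back-of-envelope roofline check: a GEMM is compute-bound when its arithmetic intensity (FLOPs per byte of memory traffic) exceeds the hardware's ridge point. The sketch below is illustrative only; the hardware figures are approximate public A100 numbers (~312 TFLOP/s dense bf16, ~2.0 TB/s HBM bandwidth), and the traffic model assumes each operand touches HBM exactly once.

```python
# Approximate A100 figures (illustrative assumptions, not from the posting).
PEAK_FLOPS = 312e12   # dense bf16 FLOP/s
PEAK_BW = 2.0e12      # HBM bytes/s
RIDGE = PEAK_FLOPS / PEAK_BW  # FLOPs/byte at the compute/memory crossover

def matmul_intensity(m: int, n: int, k: int, bytes_per_elem: int = 2) -> float:
    """Arithmetic intensity of C[m,n] = A[m,k] @ B[k,n] in FLOPs/byte,
    assuming each operand is read or written from HBM exactly once."""
    flops = 2 * m * n * k
    traffic = bytes_per_elem * (m * k + k * n + m * n)
    return flops / traffic

# Large square GEMM (prefill-like): high intensity, compute-bound.
big = matmul_intensity(4096, 4096, 4096)
# Skinny GEMV-like matmul (single-token decode): memory-bound.
small = matmul_intensity(1, 4096, 4096)

print(f"ridge point: {RIDGE:.0f} FLOPs/B")
print(f"4096^3 GEMM: {big:.0f} FLOPs/B -> {'compute' if big > RIDGE else 'memory'}-bound")
print(f"decode GEMV: {small:.1f} FLOPs/B -> {'compute' if small > RIDGE else 'memory'}-bound")
```

This is why single-token decode is dominated by weight-loading bandwidth while large-batch prefill saturates the tensor cores, and why batching, quantization, and KV-cache management matter so much for inference throughput.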

PROCESS

  • Intro call with one of our Founding Engineers

  • Technical Interview(s) with one of our Founding Engineers

  • Team fit call with the People team

  • Final interview with one of our Founding Engineers

BENEFITS

  • Fully remote work & flexible hours

  • 37 days/year of vacation & holidays

  • Health insurance allowance for you and dependents

  • Company-provided equipment

  • Wellbeing, always-be-learning and home office allowances

  • Frequent team get-togethers

  • A great, diverse, inclusive, people-first culture

Poolside builds foundation models and AI agents for the enterprise, starting with software development. We're on a mission to reach AGI through reinforcement learning, believing software engineering is the fastest path to human-level intelligence.
