About

Heidi builds an AI Care Partner that automates clinical documentation, form filling, and task management for clinicians worldwide. The system has returned over 18 million hours to clinicians in 18 months and currently supports more than 2 million patient visits weekly across 116 countries and 110+ languages. The company has raised nearly $100 million from Point72, Anthropic, and Blackbird, with a stated goal of halving the time required to deliver patient-first care.

The core technical challenge sits at the intersection of multilingual NLP, healthcare informatics, and production reliability at global scale. The system must handle clinical documentation workflows across diverse regulatory environments, languages, and medical specialties while maintaining accuracy and latency requirements that directly impact clinician workflows. The stack spans TypeScript, React, Next.js, and Node.js on the frontend with Python, NestJS, and Express on the backend, using PostgreSQL and MongoDB for persistence and running on GCP and AWS infrastructure.
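The paragraph above describes a documentation pipeline that must route requests correctly across languages, specialties, and regulatory regions before accuracy or latency even come into play. As a minimal sketch of what that routing context might look like, here is a hypothetical request shape and validation step in TypeScript; `VisitNoteRequest` and `validateRequest` are illustrative names, not Heidi's actual API.

```typescript
// Hypothetical sketch, not Heidi's API: a visit-note request must carry
// enough context (language, specialty, region) for a documentation
// pipeline to pick templates and regulatory rules before any model call.

interface VisitNoteRequest {
  transcript: string;  // raw visit transcript
  language: string;    // BCP 47 tag, e.g. "en-AU", "es-MX"
  specialty: string;   // e.g. "general-practice", "cardiology"
  region: string;      // drives regulatory and template selection
}

// Minimal validation: reject requests the pipeline could not route.
function validateRequest(req: VisitNoteRequest): string[] {
  const errors: string[] = [];
  if (req.transcript.trim().length === 0) errors.push("empty transcript");
  if (!/^[a-z]{2}(-[A-Za-z0-9]+)*$/.test(req.language)) errors.push("invalid language tag");
  if (req.specialty === "") errors.push("missing specialty");
  if (req.region === "") errors.push("missing region");
  return errors;
}
```

Validating this context up front keeps failures cheap and synchronous, which matters when downstream model calls sit on a clinician's critical path.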

The team includes clinicians, engineers, and designers; most employees have healthcare backgrounds or direct experience with clinician burnout. The operational philosophy emphasizes shipping in small increments, fast iteration cycles, and tolerance for failure in pursuit of reducing administrative burden. The Australia-based company operates globally, with Docker-based deployments and CI/CD pipelines supporting continuous delivery across production environments.

Open roles at Heidi

Explore 90 open positions at Heidi and find your next opportunity.

Staff AI Engineer (Orchestration) · Melbourne, Victoria, Australia (Hybrid) · 3w ago
Clinical AI Engineer · Sydney, New South Wales, Australia (Hybrid) · 3w ago
SEO Content Specialist · Makati, Metro Manila, Philippines (Hybrid) · 3w ago
Brand Producer · Melbourne, Victoria, Australia (Hybrid) · 3w ago
Product Manager, Models · Sydney, New South Wales, Australia (Hybrid) · 3w ago
SEO Outreach Specialist · Makati City, Metro Manila, Philippines (Hybrid) · 3w ago
Senior LLMOps Engineer · Melbourne, Victoria, Australia (Hybrid) · 3w ago
Mid Level AI Models Engineer · Melbourne, Victoria, Australia (Hybrid) · 3w ago
Senior AI Models Engineer · Melbourne, Victoria, Australia (Hybrid) · 3w ago
Senior AI Engineer · Melbourne, Victoria, Australia (Hybrid) · 3w ago
SEO Growth Specialist · Makati City, Metro Manila, Philippines (Hybrid) · 3w ago
Revenue Operations Associate · New York, United States (On-site) · $85K – $115K yearly · 1mo ago
Clinical Director · London, England, United Kingdom (Hybrid) · 1mo ago
IT Administrator · New York, United States (On-site) · $110K – $135K yearly · 1mo ago
Multilingual SEO Strategist · Makati, Metro Manila, Philippines (Hybrid) · 1mo ago
Team Lead, Accounts Receivable · Makati City, Metro Manila, Philippines (Hybrid) · 1mo ago
Accounts Receivable Analyst | NAMER · Makati City, Metro Manila, Philippines (Hybrid) · 1mo ago
Technical Recruiter · London, England, United Kingdom (Hybrid) · 1mo ago
Performance Creative Strategist · Sydney, New South Wales, Australia (Hybrid) · 1mo ago

Similar companies


EliseAI

EliseAI builds a unified conversational AI platform for property management and healthcare operations, automating workflows that span leasing tours, maintenance requests, patient scheduling, and intake forms. Founded in 2017, the company serves over 600 property owners and healthcare operators managing 5 million+ units and has raised $360 million in funding. The engineering organization ships 175+ new features per year, reflecting a rapid iteration cycle informed by frontline user feedback. The platform consolidates functionality that would otherwise require multiple point solutions, addressing operational bottlenecks in high-volume, repetitive administrative tasks: in property management, conversational AI for leasing tour coordination and maintenance request handling; in healthcare, automated patient scheduling and intake form collection. The technical approach centers on a single-platform architecture rather than a collection of disconnected tools, with production deployment at scale across both industry verticals. The engineering culture emphasizes shipping velocity and product development driven by operational constraints observed in production environments. The 175+ annual feature releases suggest continuous deployment practices and tight feedback loops between product iteration and user-facing workflows, with development priorities structured around reducing latency in administrative operations and improving throughput for organizations managing thousands of concurrent interactions across property portfolios or patient populations.

113 jobs

Eve

Eve builds AI-native infrastructure for plaintiff law firms, operating as an intelligent case assistant platform that manages litigation workflows from intake through resolution. The system processes more than 200,000 legal cases annually, handling case evaluation, medical chronology generation, demand letter drafting, and discovery responses. Developed in collaboration with OpenAI and Anthropic, the platform learns each firm's tone and style to generate documents that match attorney output, with attorneys able to train and teach the system for their specific practice patterns. The platform targets labor and employment practices and personal injury firms. Client firms report 250% year-over-year revenue growth and 2.5X case capacity increases without additional headcount, though these are self-reported outcomes rather than platform-wide guarantees. Eve claims to be the first legal AI to achieve SOC 2 Type II certification while maintaining HIPAA compliance, addressing the compliance and security requirements of handling protected health information and sensitive legal data at scale. The technical challenge set involves natural language processing for document generation, AI workflow development that adapts to individual firm processes, and maintaining enterprise-grade security infrastructure. The platform must handle the operational complexity of legal document generation across varied practice areas while meeting regulatory requirements for data handling in the legal and healthcare domains.

39 jobs

Runway

Runway is an applied AI research company developing foundational General World Models and generative video systems. The company's technical focus centers on building models that simulate, perceive, generate, and act - spanning generative video (Gen-4.5), world modeling (GWM-1), and simulation-based learning. Technical domains include embodied AI, perception systems, and foundational model development, with stated applications extending from creative tooling to scientific simulation and robotics use cases. The production infrastructure runs on AWS, using Fargate for compute orchestration, S3 and CloudFront for storage and distribution, and Lambda with Kinesis and SQS for event-driven processing. The model training and serving stack is built on PyTorch and TorchScript, with Kubernetes managing workloads via Flyte for workflow orchestration and Kueue for job scheduling. Observability relies on Prometheus and Grafana; infrastructure is provisioned through Terraform. The application layer uses TypeScript. Runway operates partnerships with NVIDIA and Lionsgate, runs Runway Studios and an AI Film Festival, and targets creators in art, entertainment, and filmmaking verticals. The company frames its mission around accelerating iteration through simulation rather than real-world experimentation, positioning world models as a mechanism to compress trial-and-error cycles across creative and scientific domains.

34 jobs

Reka

Reka builds unified multimodal foundation models that process text, images, video, and audio. The company's core technical focus is modeling the physical world through systems that handle perception, reasoning, and action across modalities. The team includes researchers and engineers from Google DeepMind and Facebook AI Research working on inference-critical domains including GPU performance engineering, computer vision, audio processing, and natural language understanding. The technical stack centers on Python, PyTorch, and JAX for model development, with CUDA and C++ for performance-critical components. Infrastructure runs on Kubernetes and Slurm for orchestration and job scheduling. Engineering roles emphasize end-to-end ownership - individuals work across the stack from model architecture through deployment, addressing bottlenecks in latency, throughput, and operational complexity at production scale. Reka operates remote-first, aggregating global talent into a distributed systems organization. The work targets enterprise and organizational deployments where multimodal capabilities must meet reliability and cost constraints. Team structure reflects early-stage dynamics: engineers wear multiple hats, and technical decisions directly shape product capabilities and production characteristics.

3 jobs

Toma

Toma operates a voice AI platform for automotive dealerships, processing over 1,000,000 calls since launching in 2024. The system handles inbound phone operations - service scheduling, call routing, and follow-up automation - with safeguards designed to manage transfer latency and revenue leakage. Core technical challenge: maintaining conversational quality and intent detection accuracy across high-variance dealership scenarios (service appointments, parts inquiries, sales handoffs) while minimizing false transfers and dropped context. The platform implements transfer triggers, clawback mechanisms for mistimed handoffs, and follow-up alerts when human staff doesn't complete actions, addressing the operational complexity of human-AI transition points in production telephony. Infrastructure runs on AWS with a TypeScript/Next.js frontend, PostgreSQL via Prisma for state management, and tRPC for type-safe API boundaries. The voice AI layer must handle real-time constraints - low-latency speech recognition and synthesis, sub-second intent classification - while managing concurrent call volume and dealership-specific context (inventory, scheduling systems, staff availability). Trade-offs center on model selection for conversational understanding versus inference cost at scale, and the reliability surface area of integrating with legacy dealership management systems. Founded by engineers from Scale AI, Uber, Lyft, and Amazon; backed by Andreessen Horowitz and Y Combinator with $17 million Series A funding. Deployment spans dealerships across the United States, including Pohanka Automotive Group, SCHOMP, Hudson Automotive Group, and Bergey's. 
Primary bottlenecks likely involve tuning voice models for domain-specific terminology (vehicle makes, service codes, dealership jargon), managing tail latency in transfer decisions where milliseconds impact customer experience, and evaluating conversational success beyond simple call completion - did the AI correctly capture appointment details, route urgency appropriately, preserve customer satisfaction? The system's value proposition hinges on converting missed calls and staff bottlenecks into captured revenue, which requires high precision on intent classification and low false-negative rates on transfer triggers to avoid revenue loss from mishandled interactions.
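The transfer-trigger and follow-up mechanisms described above can be sketched as a small decision function. This is an illustrative sketch only: `decideTransfer`, `followupNeeded`, and the thresholds are assumptions for exposition, not Toma's actual logic.

```typescript
// Illustrative sketch, not Toma's implementation: classify intent,
// transfer when confidence is low or the intent is human-only, and
// raise an alert when a staff action promised on the call never closes.

type Action = "ai_handle" | "transfer";

interface IntentResult {
  intent: string;     // e.g. "service_scheduling", "sales_inquiry"
  confidence: number; // classifier score in [0, 1]
}

// Intents routed straight to humans regardless of classifier confidence.
const HUMAN_ONLY = new Set(["sales_inquiry", "complaint"]);

// Below this score, a mishandled call costs more than a transfer.
const CONFIDENCE_FLOOR = 0.8;

function decideTransfer(result: IntentResult): Action {
  if (HUMAN_ONLY.has(result.intent)) return "transfer";
  if (result.confidence < CONFIDENCE_FLOOR) return "transfer";
  return "ai_handle";
}

// Follow-up alert: fire when the call produced a commitment that staff
// never closed out (e.g. a callback or a parts order).
function followupNeeded(promisedAction: boolean, completed: boolean): boolean {
  return promisedAction && !completed;
}
```

The interesting tuning lives in `CONFIDENCE_FLOOR`: set it too high and transfer volume erodes the automation benefit; too low and false negatives on transfer triggers leak the revenue the system exists to capture.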

1 job

Wabi

Wabi positions itself as the first personal software platform, transforming how people interact with technology through AI-powered mini apps. With $20 million in pre-seed funding, the company has established an early position in the User-Generated Software (UGS) movement, enabling anyone to create, share, and remix personalized applications without writing code. Founded by Eugenia Kuyda, former CEO of Replika, Wabi is building what investors call the "YouTube of apps": a social platform where millions of creators can build and distribute software tailored to individual needs, tastes, and contexts. The platform represents a shift from one-size-fits-all applications to personal software experiences. Rather than searching for apps that approximately match their needs, users describe their exact requirements in natural language, and Wabi generates custom mini apps optimized for their specific routines, preferences, and life situations. Operating with a lean team of 2-10 employees, Wabi aims to turn every user into a potential software developer, working toward software made for all of us, by all of us.