
Applied MLE Evaluations

Hippocratic AI
Posted on: Feb 18, 2026
Location: Palo Alto, California, United States (On-site)
Employment type: Full-time

About Us

Hippocratic AI is the leading generative AI company in healthcare. We have built the only system capable of safe, autonomous clinical conversations with patients. We have trained our own LLMs as part of our Polaris constellation, resulting in a system with over 99.9% accuracy.

Why Join Our Team

Reinvent healthcare with AI that puts safety first. We’re building the world’s first healthcare‑only, safety‑focused LLM — a breakthrough platform designed to transform patient outcomes at a global scale. This is category creation.

Work with the people shaping the future. Hippocratic AI was co‑founded by CEO Munjal Shah and a team of physicians, hospital leaders, AI pioneers, and researchers from institutions like El Camino Health, Johns Hopkins, Washington University in St. Louis, Stanford, Google, Meta, Microsoft, and NVIDIA.

Backed by the world’s leading healthcare and AI investors. We recently raised a $126M Series C at a $3.5B valuation, led by Avenir Growth, bringing total funding to $404M with participation from CapitalG, General Catalyst, a16z, Kleiner Perkins, Premji Invest, UHS, Cincinnati Children’s, WellSpan Health, John Doerr, Rick Klausner, and others.

Build alongside the best in healthcare and AI. Join experts who’ve spent their careers improving care, advancing science, and building world‑changing technologies — ensuring our platform is powerful, trusted, and truly transformative.

Location Requirement

We believe the best ideas happen together. To support fast collaboration and a strong team culture, this role is expected to be in our Palo Alto office five days a week, unless otherwise specified.

About the Role

As an Applied Machine Learning Engineer – Evaluations at Hippocratic AI, you’ll be at the core of how we measure, understand, and improve our voice-based generative AI healthcare agents.

Your work will translate complex, qualitative notions of empathy, safety, and accuracy into quantitative evaluation signals that guide model iteration and deployment.

You’ll design and implement evaluation harnesses, analysis tools, and visualization systems for multimodal agents that use language, reasoning, and speech.

Partnering closely with research, product, and clinical teams, you’ll ensure every model update is grounded in data, validated against real-world scenarios, and continuously improving in both intelligence and bedside manner.

This is a hands-on, experimental role for ML engineers who care deeply about quality, safety, and user experience—and who thrive at the intersection of research and product.

What You'll Do

  • Design and implement evaluation harnesses for multimodal agent tasks, spanning speech, text, reasoning, and interaction flows.

  • Build interactive visualization and analysis tools that help engineers, researchers, and clinicians inspect model and UX performance.

  • Define, automate, and maintain continuous evaluation pipelines, ensuring regressions are caught early and model releases improve real-world quality.

  • Collaborate with product and clinical teams to translate qualitative healthcare goals (e.g., empathy, clarity, compliance) into measurable metrics.

  • Analyze evaluation data to uncover trends, propose improvements, and support iterative model tuning and fine-tuning.
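The responsibilities above might look roughly like the following in practice. This is an illustrative sketch only, not Hippocratic AI's actual tooling: the `Transcript` type, the keyword-based `clarity_score` rubric, and the `regression_gate` threshold are hypothetical stand-ins for the clinician-built rubrics and model-based judges a production harness would use.

```python
from dataclasses import dataclass

@dataclass
class Transcript:
    """The agent's turns from one patient conversation (hypothetical schema)."""
    agent_turns: list[str]

# Stand-in rubric: translate a qualitative goal ("clarity") into a
# checkable signal by flagging unexplained medical jargon.
JARGON = {"hypertension", "myocardial", "idiopathic"}

def clarity_score(t: Transcript) -> float:
    """Fraction of agent turns free of unexplained medical jargon."""
    clear = sum(
        1 for turn in t.agent_turns
        if not (JARGON & set(turn.lower().split()))
    )
    return clear / len(t.agent_turns)

def regression_gate(baseline: float, candidate: float, tol: float = 0.02) -> bool:
    """Pass only if the candidate model stays within `tol` of the baseline."""
    return candidate >= baseline - tol

# Run the metric over a small evaluation set and aggregate.
transcripts = [
    Transcript(["Your blood pressure is a little high.",
                "Let's talk about your medication schedule."]),
    Transcript(["You may have idiopathic symptoms."]),
]
scores = [clarity_score(t) for t in transcripts]
mean_score = sum(scores) / len(scores)
```

A continuous pipeline would run a gate like `regression_gate` on every candidate release, so a drop in a qualitative metric blocks deployment instead of surfacing in production.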

What You Bring

Must-Have:

  • 4+ years of experience in applied ML, ML engineering, or AI evaluation, with a focus on building and analyzing model pipelines.

  • Strong skills in Python, with experience in data processing, experiment tracking, and model analysis frameworks (e.g., Weights & Biases, MLflow, Pandas).

  • Familiarity with LLM evaluation methods, speech-to-text/text-to-speech models, or multimodal systems.

  • Understanding of prompt engineering, model fine-tuning, and retrieval-augmented generation (RAG) techniques.

  • Comfortable collaborating with cross-functional partners across research, product, and design teams.

  • Deep interest in AI safety, healthcare reliability, and creating measurable systems for model quality.

Nice-to-Have:

  • Experience building human-in-the-loop evaluation systems or UX research tooling.

  • Knowledge of visualization frameworks (e.g., Streamlit, Dash, React) for experiment inspection.

  • Familiarity with speech or multimodal model evaluation, including latency, comprehension, and conversational flow metrics.

If you’re passionate about understanding how AI behaves, measuring it rigorously, and helping shape the next generation of clinically safe, empathetic voice agents, we’d love to hear from you.

Join Hippocratic AI and help set the benchmark for evaluation-driven AI development in healthcare.

Please be aware of recruitment scams impersonating Hippocratic AI. All recruiting communication will come from @hippocraticai.com email addresses. We will never request payment or sensitive personal information during the hiring process.


Hippocratic AI develops safety-focused LLMs for healthcare, having completed over 150 million clinical patient interactions and deploying 1000+ AI agents to address the global healthcare worker shortage.
