
OpenEvidence

About

OpenEvidence operates a HIPAA-compliant medical information platform that has handled over 100 million AI-powered clinical consultations from U.S. doctors and frontline clinicians. The system functions as a natural language search and retrieval layer over medical literature, synthesizing evidence from trusted sources to deliver point-of-care clinical decision support. With more than 40% of U.S. physicians logging in daily, the platform addresses a core bottleneck in clinical workflows: the exponential growth of medical knowledge against fixed physician time budgets. It surfaces relevant evidence in seconds rather than the hours a traditional literature review requires.

The technical architecture supports evidence synthesis across landmark medical literature, aggregating content through clinical partnerships while maintaining compliance constraints required for healthcare settings. The platform serves as a knowledge management system that operates across practice environments - from academic medical centers to rural clinics - suggesting infrastructure designed for variable network conditions and diverse deployment contexts. Free access for verified U.S. healthcare professionals indicates a distribution model optimized for maximum clinician adoption rather than per-seat pricing common in enterprise healthcare software.

Core technical domains span clinical decision support, AI copilot functionality for clinicians, and content aggregation from medical literature sources. The system's reliability requirements are elevated given its role in clinical decision pathways affecting patient outcomes, demanding careful evaluation of failure modes where incorrect or incomplete evidence synthesis could influence treatment decisions.

Open roles at OpenEvidence

Explore 9 open positions at OpenEvidence and find your next opportunity.

Software Engineer, Data Infrastructure · San Francisco, California, US (On-site) · 2mo ago

Software Engineer, Product · San Francisco, California, US · 2mo ago

Software Engineer, Applied AI · San Francisco, California, US (On-site) · 2mo ago

Make Your Own Role · San Francisco, California, US · 2mo ago

Platform Security · Miami, Florida, US · 2mo ago

Infrastructure Engineer · Miami, Florida, US · 2mo ago

Site Reliability · San Francisco, California, US (On-site) · 2mo ago

Research Engineer / ML Scientist · San Francisco, California, US (On-site) · 2mo ago

Member of Technical Staff · San Francisco, California, US (On-site) · 3mo ago

Similar companies


OpenAI

OpenAI develops and deploys generative transformer models at scale, operating production systems that serve millions of users through ChatGPT and the OpenAI API. The technical challenge spans the full stack: research engineering for novel model architectures, safety engineering for alignment and robustness, and production infrastructure for API deployment at scale. Teams work across research, product engineering, and operations, organized around both advancing model capabilities and maintaining reliability for deployed systems serving substantial user traffic. Core technical domains include model development for the GPT series, API infrastructure to support downstream applications, and safety research focused on making AGI beneficial. Engineering work involves trade-offs among model capability, inference cost, latency, and safety constraints. Research teams collaborate with product and engineering functions to move experimental systems into production deployment, requiring expertise in distributed systems, model optimization, and operations at scale. The company operates from San Francisco with an international presence, positioning its work as a global effort toward artificial general intelligence. Cross-functional teams of researchers, engineers, and operations staff work on problems ranging from foundational research to production reliability. The technical culture emphasizes rigorous safety practices alongside capability advancement, with autonomy and ownership distributed across teams owning distinct components of the research-to-deployment pipeline.

741 jobs

Perplexity

Perplexity operates an AI-powered answer engine processing over 150 million questions weekly across web, mobile, and enterprise platforms. Founded in 2022, the company combines real-time web search with multiple LLMs to deliver source-attributed answers. The architecture serves both consumer and enterprise workloads, with enterprise deployments requiring security guarantees for knowledge-worker use cases, including legal-research partnerships with firms such as Latham & Watkins. The technical stack runs on AWS infrastructure with Terraform for provisioning, Python and Go for backend services, and PyTorch with DeepSpeed and FSDP for model training and inference. Data pipelines use dbt, SQL, Snowflake, and Databricks; frontends use React and TypeScript, with Docker containerization and Open Policy Agent for access control. This architecture must meet tail-latency and throughput requirements for real-time search retrieval paired with LLM inference at consumer scale, while keeping source-credibility verification in the critical path. The engineering focus centers on information-retrieval accuracy, model response quality, and citation reliability rather than advertising optimization. Production systems must balance inference cost against answer quality across multiple models, manage retrieval latency for real-time web indexing, and maintain reliability for both free-tier consumer traffic and enterprise SLA requirements. Pro-tier monetization suggests capacity-based or model-selection tiering rather than pure ad-based revenue.

76 jobs

Sesame

Sesame builds voice interfaces through tight integration of hardware, software, and machine learning, pursuing research in speech generation, personality modeling, and multimodal ML. The company operates large GPU clusters to support ambitious research programs aimed at making computers lifelike through natural voice interaction, with development cycles measured in days rather than quarters. Backed by a16z, Sequoia, Spark, and Matrix, the technical effort spans PyTorch-based model development alongside Android and iOS deployment, with infrastructure supporting rapid iteration from whiteboard concepts to production systems. The engineering organization is an interdisciplinary team of long-tenured experts drawn from machine learning, hardware, software, and entertainment backgrounds, operating from offices in San Francisco, Bellevue, and New York. Core technical domains include speech generation systems, personality modeling for voice companions, and multimodal ML architectures that coordinate audio with other sensory inputs. The product strategy emphasizes deliberate design choices to create voice interfaces that are nuanced and intimate rather than intrusive, with hardware engineering targeting lightweight eyewear form factors for all-day wear. Infrastructure and operational requirements center on GPU cluster management for training and inference of speech models, alongside mobile platform engineering for real-time voice processing. The technical challenge is crossing the uncanny valley in voice interaction - achieving low latency, naturalness, and contextual appropriateness simultaneously across diverse usage scenarios. Team composition reflects this: specialists in human-computer interaction work alongside ML researchers and hardware engineers to optimize the full stack from acoustic modeling through industrial design.

26 jobs

Chai Discovery

Chai Discovery builds frontier AI foundation models to predict and reprogram biochemical molecular interactions, transforming drug discovery from empirical screening of billions of candidate sequences into deterministic computational design. The company's platform achieves over 85% success rates in designing molecules that meet drug-like properties - a fundamental shift from traditional approaches that require years of wet-lab iteration and billions in capital. Founded by researchers who co-invented protein language modeling and built state-of-the-art folding algorithms, Chai Discovery has shipped Chai-1 and Chai-2, breakthrough models for computational molecular design now deployed in production pharmaceutical workflows. The technical stack spans protein language modeling, protein folding algorithms, computational antibody design, and molecular interaction prediction. The platform handles previously undruggable targets, including GPCR agonist design with minimal experimental screening - a capability that addresses targets accounting for roughly 30% of marketed drugs but historically requiring extensive trial-and-error optimization. Design precision operates at atomic resolution, enabling drug-like antibody engineering with explicit control over molecular properties rather than stochastic library screening. Chai Discovery is backed by OpenAI, Thrive Capital, and General Catalyst, and maintains active partnerships with pharmaceutical companies including Eli Lilly. The company operates from the US under CEO Joshua Meier, deploying models that compress multi-year discovery timelines into computational workflows. For engineers, the inference challenge involves running large-scale protein structure prediction and molecular design models in production environments where latency and throughput directly gate pharmaceutical R&D cycles, with evaluation rigor defined by experimental validation rates rather than benchmark metrics.

14 jobs

Bioptimus

Bioptimus builds foundation models for biology and biomedical applications, with H-Optimus as its flagship model for digital pathology. H-Optimus ranks #1 among 22 evaluated pathology foundation models and was trained on 2 billion images contributed by over 4,000 clinical practices. The company targets deployment across drug discovery, clinical trial analytics, and clinical decision support, with approximately 100 scientific publications annually leveraging their models and a projection toward 1 million total downloads across their model family. The technical challenge centers on multiscale biological data integration - mapping representations from molecular to organism level while maintaining utility across diverse downstream tasks. Model evaluation in this domain involves trade-offs between pretraining data volume, task transfer performance, and domain-specific fine-tuning requirements. Digital pathology specifically presents bottlenecks in gigapixel image processing, label noise from clinical annotation workflows, and distribution shift between training cohorts and deployment sites. Bioptimus has raised $76 million and operates from France. The company's stated goal is producing a universal foundation model for biology, positioning their work at the intersection of large-scale self-supervised learning, biological domain knowledge encoding, and production deployment in regulated healthcare environments.

4 jobs