
Bioptimus

About

Bioptimus builds foundation models for biology and biomedical applications, with H-Optimus as its flagship model for digital pathology. H-Optimus ranks #1 among 22 evaluated pathology foundation models and was trained on 2 billion images contributed by over 4,000 clinical practices. The company targets deployment across drug discovery, clinical trial analytics, and clinical decision support; roughly 100 scientific publications per year leverage its models, and the company projects 1 million total downloads across its model family.

The technical challenge centers on multiscale biological data integration - mapping representations from molecular to organism level while maintaining utility across diverse downstream tasks. Model evaluation in this domain involves trade-offs between pretraining data volume, task transfer performance, and domain-specific fine-tuning requirements. Digital pathology specifically presents bottlenecks in gigapixel image processing, label noise from clinical annotation workflows, and distribution shift between training cohorts and deployment sites.
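The gigapixel bottleneck is usually handled by tiling: the whole-slide image is cut into fixed-size patches, mostly-background tiles are discarded, and a pretrained encoder embeds each remaining patch for downstream tasks. Below is a minimal sketch of that loop; the OpenSlide reader and the generic encoder argument are illustrative assumptions, not a description of H-Optimus's actual interface.

# Sketch: tile a gigapixel whole-slide image (WSI) and embed each
# informative patch with a pretrained encoder. The reader (openslide)
# and the encoder interface are assumptions for illustration.
import numpy as np
import openslide
import torch

def embed_slide(wsi_path: str, encoder: torch.nn.Module,
                patch_size: int = 224, level: int = 0) -> torch.Tensor:
    """Return one embedding per non-background patch of the slide."""
    slide = openslide.OpenSlide(wsi_path)
    width, height = slide.level_dimensions[level]
    embeddings = []
    for y in range(0, height - patch_size + 1, patch_size):
        for x in range(0, width - patch_size + 1, patch_size):
            patch = slide.read_region((x, y), level, (patch_size, patch_size))
            arr = np.asarray(patch.convert("RGB"), dtype=np.float32) / 255.0
            if arr.mean() > 0.9:          # skip mostly-white background tiles
                continue
            tensor = torch.from_numpy(arr).permute(2, 0, 1).unsqueeze(0)
            with torch.no_grad():
                embeddings.append(encoder(tensor).squeeze(0))  # assumes (1, d) output
    return torch.stack(embeddings)

The tiling step is also where the label-noise and distribution-shift problems mentioned above enter: patch-level labels are inherited from slide-level annotations, and stain or scanner differences between sites change the patch statistics the encoder sees.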

Bioptimus has raised $76 million and operates from France. The company's stated goal is to produce a universal foundation model for biology, positioning its work at the intersection of large-scale self-supervised learning, biological domain knowledge encoding, and production deployment in regulated healthcare environments.

Open roles at Bioptimus

Explore 3 open positions at Bioptimus and find your next opportunity.


Workplace Operations Specialist

Bioptimus

Paris, France (Hybrid)

4w ago

Biology Data Quality Engineer

Bioptimus

France + 3 more (Remote)

2mo ago

Similar companies


Cerebras

Cerebras Systems designs and manufactures wafer-scale AI chips that consolidate the compute capacity of dozens of GPUs into a single device. Founded in 2015, the company builds its core architecture around a chip 56 times larger than a standard GPU, addressing the operational complexity of distributed training and inference by offering the programmability of a single-device system with multi-GPU performance. This approach collapses the network bottlenecks and synchronization overhead inherent in GPU clusters, enabling users to run large-scale ML workloads without orchestrating hundreds of accelerators. The company's technical stack spans the full systems hierarchy: custom silicon (wafer-scale chip architecture), compiler infrastructure (MLIR, LLVM IR, and the proprietary CSL language), runtime orchestration (Kubernetes), and deployment tooling. Engineering work touches computer architecture, deep learning kernels, systems software for hardware programmability, and inference serving at scale. Recent partnerships include work with OpenAI on inference deployment, alongside engagements with national laboratories, global enterprises, and healthcare systems requiring high-throughput ML serving. Cerebras positions its hardware for both training and inference workloads, with claimed industry-leading speeds stemming from on-chip interconnect bandwidth and the elimination of multi-chip communication latency. The architecture trades traditional data center modularity for integrated performance - relevant for workloads bottlenecked by cross-device synchronization or where cost-per-inference and tail latency matter more than incremental horizontal scaling. Development infrastructure includes C++, Python, Go, and Zig across the stack, with CI/CD through GitHub Actions and Jenkins.

135 jobs

Runpod

RunPod operates an end-to-end AI infrastructure platform focused on GPU compute provisioning for model training, inference, and distributed agent orchestration. The platform serves over 500,000 developers, ranging from solo practitioners to enterprise teams deploying at scale. Core infrastructure handles compute allocation, orchestration complexity, and operational overhead, positioning the platform as accessible infrastructure that does not demand deep systems expertise from users. The technical stack centers on Go, Python, and TypeScript, with containerization through Docker and Kubernetes orchestration on Linux. Engineering domains span distributed systems, GPU compute scheduling, and developer tooling designed to abstract provisioning and scaling mechanics. The company emphasizes reducing operational friction: developers interact with compute resources without managing underlying cluster complexity or infrastructure provisioning bottlenecks. RunPod maintains a remote-first structure with teams distributed across the U.S., Canada, Europe, and India. The platform's design reflects a systems-first approach to making GPU compute economically viable and operationally manageable, targeting workloads where cost, reliability, and time-to-deployment constrain AI development cycles.

26 jobs

Xaira Therapeutics

Xaira Therapeutics is an integrated biotechnology company founded in 2023 that combines AI model development, large-scale biological data generation, and therapeutic product development under one organization. Built on protein design research from the University of Washington's Institute for Protein Design and Dr. David Baker's work, the company raised over $1 billion before emerging from stealth in 2024. Co-founded and incubated by ARCH Venture Partners and Foresite Labs, Xaira operates across three locations: South San Francisco, Seattle, and London. The technical infrastructure spans protein design, predictive patient stratification, and drug discovery systems. The stack includes Python, C++, PyTorch, and JAX for modeling, with distributed training via DDP and FSDP. Experimental infrastructure includes Cytiva AKTA and Agilent HPLC systems for protein characterization and purification workflows. The company's approach integrates computational predictions with wet-lab data generation at scale, attempting to shift drug discovery from empirical methods toward engineered precision. Technical domains span AI model development for biological systems, protein engineering, therapeutic design, and patient selection algorithms. The organization is led by David Baker and Marc Tessier-Lavigne, combining academic protein design expertise with pharmaceutical development experience. Model training, experimental validation loops, and the therapeutic development pipeline all sit within one vertically integrated structure, so data generation, model iteration, and product development occur inside the same organization.
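As a rough illustration of what the DDP/FSDP training setup mentioned above typically looks like in PyTorch, the sketch below wraps a placeholder network in FullyShardedDataParallel; the model, data, and hyperparameters are stand-ins, not Xaira's actual training code.

# Minimal FSDP training sketch (PyTorch). Launch with
# `torchrun --nproc_per_node=N train.py`; everything model-specific
# is a placeholder, only the sharded-training scaffolding is the point.
import os
import torch
import torch.distributed as dist
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

def main():
    dist.init_process_group("nccl")             # one process per GPU
    local_rank = int(os.environ["LOCAL_RANK"])  # set by torchrun
    torch.cuda.set_device(local_rank)
    model = torch.nn.Sequential(                # stand-in for a protein model
        torch.nn.Linear(1024, 4096),
        torch.nn.GELU(),
        torch.nn.Linear(4096, 1024),
    ).cuda()
    model = FSDP(model)                         # shard parameters and gradients across ranks
    opt = torch.optim.AdamW(model.parameters(), lr=1e-4)
    for _ in range(10):                         # toy loop over random batches
        batch = torch.randn(8, 1024, device="cuda")
        loss = model(batch).pow(2).mean()
        loss.backward()
        opt.step()
        opt.zero_grad()
    dist.destroy_process_group()

if __name__ == "__main__":
    main()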

24 jobs

Chai Discovery

Chai Discovery builds frontier AI foundation models to predict and reprogram biochemical molecular interactions, aiming to transform drug discovery from empirical screening of billions of candidate sequences into deterministic computational design. The company's platform achieves success rates above 85% in designing molecules that meet drug-like property criteria - a fundamental shift from traditional approaches that require years of wet-lab iteration and billions in capital. Founded by researchers who co-invented protein language modeling and built state-of-the-art folding algorithms, Chai Discovery has shipped Chai-1 and Chai-2, breakthrough models for computational molecular design now deployed in production pharmaceutical workflows. The technical stack spans protein language modeling, protein folding algorithms, computational antibody design, and molecular interaction prediction. The platform handles previously undruggable targets, including GPCR agonist design with minimal experimental screening - a capability that addresses target classes accounting for roughly 30% of marketed drugs but historically requiring extensive trial-and-error optimization. Design precision operates at atomic resolution, enabling drug-like antibody engineering with explicit control over molecular properties rather than stochastic library screening. Chai Discovery is backed by OpenAI, Thrive Capital, and General Catalyst, and maintains active partnerships with pharmaceutical companies including Eli Lilly. The company operates from the US under CEO Joshua Meier, deploying models that compress multi-year discovery timelines into computational workflows. For engineers, the inference challenge involves running large-scale protein structure prediction and molecular design models in production environments where latency and throughput directly gate pharmaceutical R&D cycles, with evaluation rigor defined by experimental validation rates rather than benchmark metrics.
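The throughput side of that inference challenge is commonly addressed with dynamic micro-batching: requests queue briefly so the GPU can run one fused forward pass over several inputs. The sketch below shows the generic pattern; the queue, model callable, and timeout values are illustrative assumptions, not Chai Discovery's serving stack.

# Dynamic micro-batching sketch: trade a small latency bound (max_wait_s)
# for GPU throughput by fusing queued requests into one forward pass.
import queue
import time
from typing import Callable

REQUESTS: queue.Queue = queue.Queue()  # items: (input_sequence, callback)

def serve(model: Callable, max_batch: int = 8, max_wait_s: float = 0.05):
    while True:
        batch = [REQUESTS.get()]                # block until the first request
        deadline = time.monotonic() + max_wait_s
        while len(batch) < max_batch:           # gather more until deadline
            remaining = deadline - time.monotonic()
            if remaining <= 0:
                break
            try:
                batch.append(REQUESTS.get(timeout=remaining))
            except queue.Empty:
                break
        seqs, callbacks = zip(*batch)
        outputs = model(list(seqs))             # one fused forward pass
        for cb, out in zip(callbacks, outputs):
            cb(out)                             # deliver result to each caller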

14 jobs

OpenEvidence

OpenEvidence operates a HIPAA-compliant medical information platform that handles over 100 million AI-powered clinical consultations from U.S. doctors and frontline clinicians. The system functions as a natural language search and retrieval layer over medical literature, synthesizing evidence from trusted sources to deliver point-of-care clinical decision support. With more than 40% of U.S. physicians logging in daily, the platform addresses a core bottleneck in clinical workflows: the exponential growth of medical knowledge against fixed physician time budgets. The system surfaces relevant evidence in seconds rather than the hours traditional literature review requires. The technical architecture supports evidence synthesis across landmark medical literature, aggregating content through clinical partnerships while maintaining compliance constraints required for healthcare settings. The platform serves as a knowledge management system that operates across practice environments - from academic medical centers to rural clinics - suggesting infrastructure designed for variable network conditions and diverse deployment contexts. Free access for verified U.S. healthcare professionals indicates a distribution model optimized for maximum clinician adoption rather than per-seat pricing common in enterprise healthcare software. Core technical domains span clinical decision support, AI copilot functionality for clinicians, and content aggregation from medical literature sources. The system's reliability requirements are elevated given its role in clinical decision pathways affecting patient outcomes, demanding careful evaluation of failure modes where incorrect or incomplete evidence synthesis could influence treatment decisions.
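At its core, this kind of system is retrieval followed by synthesis: embed the clinician's question, rank a pre-embedded literature corpus by similarity, and pass the top passages to a generator. The sketch below covers only the ranking step; the embeddings and corpus are placeholder assumptions, not OpenEvidence's actual pipeline.

# Cosine-similarity ranking over a pre-embedded literature corpus.
# query_vec and corpus_vecs would come from an embedding model (assumed).
import numpy as np

def top_k_passages(query_vec: np.ndarray, corpus_vecs: np.ndarray,
                   passages: list[str], k: int = 5) -> list[str]:
    """corpus_vecs: (N, d) passage embeddings with L2-normalized rows."""
    q = query_vec / np.linalg.norm(query_vec)
    scores = corpus_vecs @ q                    # cosine similarity per passage
    top = np.argsort(scores)[::-1][:k]          # indices of the k best matches
    return [passages[i] for i in top]

The reliability requirements noted above land mostly downstream of this step: the hard failure modes are a relevant passage that is never retrieved, or a retrieved passage that the synthesis stage misrepresents.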

9 jobs

Zuma

Zuma builds agentic AI systems for multifamily property management, operating at scale across thousands of apartment communities serving millions of residents. The system handles lead engagement, tour scheduling, and rent collections - repetitive operations work that creates bottlenecks for onsite teams - while maintaining human oversight for relationship-critical interactions. The architecture is designed for human-AI collaboration rather than full automation: AI agents process high-volume, structured tasks while property managers handle hospitality and community engagement, where judgment and relational context matter. The technical approach emphasizes rapid iteration driven by field feedback from property managers. Engineers and designers work directly with operations teams to identify latency and reliability requirements in production environments - tour scheduling conflicts, communication failure modes during collections, lead response time sensitivity. This operational integration surfaces real constraints: property management workflows involve variable tenant needs, time-sensitive coordination, and edge cases where escalation to human judgment is the correct trade-off. The system is designed to amplify existing teams by removing operational overhead rather than replacing domain expertise. The company is venture-backed by Andreessen Horowitz and Y Combinator, headquartered in Santa Monica, and ships product rapidly, prioritizing deployment feedback over extended development cycles. Technical domains span agentic AI implementation, human-AI collaboration interfaces, and operations integration - work that requires understanding both inference system design and the operational complexity of residential property management at scale.
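A hypothetical sketch of the routing decision that human-AI split implies: structured, high-confidence tasks go to the agent, while anything ambiguous or relationship-critical escalates to a person. The task fields, categories, and confidence threshold are illustrative assumptions, not Zuma's actual schema.

# Illustrative agent-vs-human routing for property-management tasks.
from dataclasses import dataclass

@dataclass
class Task:
    kind: str             # e.g. "tour_scheduling", "collections", "complaint"
    confidence: float     # agent's self-reported confidence in its draft action
    tenant_flagged: bool  # tenant previously asked for a human

AUTOMATABLE = {"tour_scheduling", "lead_followup", "rent_reminder"}

def route(task: Task) -> str:
    # Relationship-critical or out-of-scope work always goes to a human.
    if task.tenant_flagged or task.kind not in AUTOMATABLE:
        return "escalate_to_property_manager"
    # Low-confidence drafts get human review rather than auto-send.
    if task.confidence < 0.85:
        return "escalate_to_property_manager"
    return "handle_with_agent"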

9 jobs