About

EliseAI builds a unified conversational AI platform for property management and healthcare operations, automating workflows that span leasing tours, maintenance requests, patient scheduling, and intake forms. Founded in 2017, the company serves over 600 property owners and healthcare operators managing 5 million+ units, having raised $360 million in funding. The engineering organization ships 175+ new features per year, reflecting a rapid iteration cycle informed by frontline user feedback.

The platform consolidates functionality that would otherwise require multiple point solutions, addressing operational bottlenecks in high-volume, repetitive administrative tasks. In property management, this includes conversational AI for leasing tour coordination and maintenance request handling. In healthcare, the system automates patient scheduling and intake form collection. The technical approach centers on a single platform architecture rather than a collection of disconnected tools, with production deployment at scale across both industry verticals.

The company's engineering culture emphasizes shipping velocity and product development driven by operational constraints observed in production environments. The 175+ annual feature releases suggest continuous deployment practices and tight feedback loops between product iteration and user-facing workflows. Development priorities appear structured around reducing latency in administrative operations and improving throughput for organizations managing thousands of concurrent interactions across property portfolios or patient populations.

Open roles at EliseAI

Explore 101 open positions at EliseAI and find your next opportunity.


Senior Software Engineer

EliseAI

San Francisco, California, United States (On-site)

$240K – $300K yearly · 3 months ago

Similar companies


Eve

Eve builds AI-native infrastructure for plaintiff law firms, operating as an intelligent case assistant platform that manages litigation workflows from intake through resolution. The system processes more than 200,000 legal cases annually, handling case evaluation, medical chronology generation, demand letter drafting, and discovery responses. Developed in collaboration with OpenAI and Anthropic, the platform learns each firm's tone and style to generate documents that match attorney output, and attorneys can train the system on their specific practice patterns. The platform targets labor and employment practices and personal injury firms. Client firms report 250% year-over-year revenue growth and 2.5X case capacity increases without additional headcount, though these are self-reported outcomes rather than platform-wide guarantees. Eve claims to be the first legal AI to achieve SOC 2 Type II certification while maintaining HIPAA compliance, addressing the security requirements of handling protected health information and sensitive legal data at scale. The technical work involves natural language processing for document generation, AI workflows that adapt to individual firm processes, and enterprise-grade security infrastructure. The platform must handle the operational complexity of legal document generation across varied practice areas while meeting regulatory requirements for data handling in the legal and healthcare domains.

39 jobs

FurtherAI

FurtherAI builds domain-specific AI infrastructure for commercial insurance workflows, targeting the document-heavy operational bottlenecks that dominate underwriting, claims processing, and policy comparison work. Their AI Workspace handles submission intake, underwriting audits, and compliance checks by parsing and normalizing unstructured data from broker letters, property schedules, ACORD forms, and loss histories. The system reports 95–97% accuracy on these tasks compared to 70–77% for manual processing, addressing a workflow layer where precision directly impacts underwriting decisions and operational throughput. The platform is deployed by insurers, reinsurers, MGAs, and brokers writing over $15B in premiums across all 50 U.S. states. Technical focus areas include document understanding, NLP for insurance-specific language and formats, data normalization pipelines, and workflow automation that integrates with existing carrier systems. The core technical challenge is reliability at scale across heterogeneous document types and insurance product lines, where edge cases in policy language or submission format can propagate downstream into underwriting errors or compliance gaps. FurtherAI operates in a sector facing a projected workforce reduction of 400,000 by 2026, with approximately 3 million insurance professionals currently handling manual document processing. The system architecture must handle the latency requirements of underwriting timelines while maintaining accuracy thresholds that meet regulatory and risk management standards. Key operational trade-offs include throughput on batch processing of submissions versus real-time responsiveness for urgent underwriting decisions, and the cost-accuracy frontier for document parsing models across different insurance product complexities.
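The normalization layer described above can be pictured as mapping heterogeneous parsed documents onto one common submission schema. The sketch below is purely illustrative and not FurtherAI's implementation; all field names and aliases are hypothetical stand-ins for the kind of variation seen across broker letters, property schedules, and loss runs.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class NormalizedSubmission:
    insured_name: str
    total_insured_value: Optional[float]  # USD
    loss_count_5yr: Optional[int]

# Hypothetical field aliases across source formats.
ALIASES = {
    "insured_name": ["insured", "named_insured", "account_name"],
    "total_insured_value": ["tiv", "total_tiv", "sum_insured"],
    "loss_count_5yr": ["losses_5yr", "claim_count"],
}

def normalize(record: dict) -> NormalizedSubmission:
    """Map one parsed document's fields onto the common schema."""
    def pick(field: str):
        # Try the canonical name first, then known aliases.
        for key in [field] + ALIASES[field]:
            if key in record:
                return record[key]
        return None

    tiv = pick("total_insured_value")
    losses = pick("loss_count_5yr")
    return NormalizedSubmission(
        insured_name=str(pick("insured_name") or "UNKNOWN"),
        total_insured_value=float(tiv) if tiv is not None else None,
        loss_count_5yr=int(losses) if losses is not None else None,
    )
```

In a real pipeline of this kind, alias tables give way to model-driven extraction, but the downstream contract is the same: every submission, whatever its source format, arrives at underwriting in one typed schema.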

18 jobs

OpenRouter

OpenRouter operates a unified API gateway that aggregates 300+ large language models from 60+ providers into a single interface, processing over 100 trillion tokens annually for more than 5 million developers. Founded in 2023 by Alex Atallah and backed by $40M Series A funding from Andreessen Horowitz, Menlo Ventures, and Sequoia Capital, the platform addresses multi-provider infrastructure complexity through intelligent routing, automatic failover, and consolidated billing across models from Anthropic, OpenAI, Google, Meta, and dozens of other providers. The technical architecture prioritizes reliability and operational flexibility through automatic fallbacks between providers, response healing for malformed JSON outputs, and customizable data policies. The platform standardizes access across heterogeneous model APIs while maintaining transparent per-token pricing without subscription tiers. Public usage rankings provide visibility into model performance patterns across the user base. OpenRouter's infrastructure handles workloads ranging from individual developer projects to enterprise-scale deployments, with completion insurance and routing logic designed to mitigate single-provider outages and rate limiting. The platform's tech stack includes React, Next.js, TypeScript, and Cloudflare Workers for edge deployment. Core operational focus centers on eliminating vendor lock-in while maintaining production-grade uptime across a rapidly expanding model catalog.
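OpenRouter's gateway is commonly accessed through an OpenAI-compatible chat-completions endpoint, with an ordered list of fallback models steering its routing. The sketch below only builds the request payload; the model identifiers are examples, and the exact parameter names (in particular the `models` fallback list) should be verified against OpenRouter's current API reference.

```python
import json

# OpenRouter's OpenAI-compatible chat-completions endpoint.
API_URL = "https://openrouter.ai/api/v1/chat/completions"

def build_request(prompt: str) -> dict:
    """Assemble a chat-completions payload with an ordered
    fallback list, as OpenRouter's routing is commonly described."""
    return {
        "model": "anthropic/claude-3.5-sonnet",  # primary choice
        "models": [                              # fallbacks, in priority order
            "anthropic/claude-3.5-sonnet",
            "openai/gpt-4o",
            "meta-llama/llama-3.1-70b-instruct",
        ],
        "messages": [{"role": "user", "content": prompt}],
    }

payload = build_request("Summarize this lease clause in one sentence.")
body = json.dumps(payload)  # POSTed to API_URL with an Authorization: Bearer header
```

Because the schema is OpenAI-compatible, the same payload shape works across every provider behind the gateway; only the model identifiers change, which is the vendor-lock-in point the paragraph above describes.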

8 jobs

SambaNova

SambaNova builds a full-stack AI inference platform centered on custom dataflow chips (RDUs) and a three-tier memory architecture designed to address latency and energy efficiency bottlenecks in generative AI deployment. The architecture targets enterprise and government workloads requiring on-premises or sovereign deployment: fine-tuning open-source models behind customer firewalls with full data and model ownership retention. The platform powers sovereign AI data centers across Australia, Europe, and the UK, focusing on avoiding vendor lock-in to proprietary inference services. The technical approach uses custom dataflow technology rather than GPU-based architectures, trading off ecosystem maturity for claimed improvements in inference throughput and energy consumption at scale. The three-tier memory design addresses memory bandwidth constraints common in transformer inference. The platform supports PyTorch-based model fine-tuning and deployment workflows, with integration points through Python and C++ APIs. Operational complexity centers on full-stack ownership (hardware, software, and deployment infrastructure), requiring coordination across chip design, systems software, and model serving layers. The stack includes standard ML tooling (PyTorch, Python) alongside proprietary components for the RDU runtime and memory management. Build and CI infrastructure uses Bazel and CircleCI; artifact management runs through Google Artifact Registry and JFrog. The deployment model targets enterprises prioritizing data sovereignty over cloud-based inference APIs, introducing trade-offs in operational overhead versus control and latency predictability for on-premises workloads.
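The memory-bandwidth constraint mentioned above can be made concrete with back-of-the-envelope arithmetic: autoregressive decoding must stream the full weight set from memory for every generated token, so single-stream throughput is bounded by bandwidth divided by model size. The numbers below are illustrative, not SambaNova specifications.

```python
def max_decode_tokens_per_s(param_count: float, bytes_per_param: float,
                            mem_bandwidth_gb_s: float) -> float:
    """Rough upper bound on single-stream decode throughput:
    each generated token reads all model weights once from memory."""
    model_bytes = param_count * bytes_per_param
    return (mem_bandwidth_gb_s * 1e9) / model_bytes

# Illustrative: a 70B-parameter model in fp16 (2 bytes/param)
# on a device with 1000 GB/s of memory bandwidth.
bound = max_decode_tokens_per_s(70e9, 2, 1000)
print(round(bound, 1))  # ~7.1 tokens/s per stream
```

This is why tiered memory and batching matter for inference economics: the bound scales linearly with effective bandwidth, and batching amortizes each weight read across many concurrent streams.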

7 jobs