
FurtherAI

About

FurtherAI builds domain-specific AI infrastructure for commercial insurance workflows, targeting the document-heavy operational bottlenecks that dominate underwriting, claims processing, and policy comparison work. Their AI Workspace handles submission intake, underwriting audits, and compliance checks by parsing and normalizing unstructured data from broker letters, property schedules, ACORD forms, and loss histories. The system reports 95–97% accuracy on these tasks, compared to 70–77% for manual processing, in a workflow layer where precision directly affects underwriting decisions and operational throughput.
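
The intake-normalization step described above can be sketched as follows. This is a hypothetical illustration, not FurtherAI's pipeline: the field names, regex patterns, and `normalize_submission` helper are all invented, and a production system would presumably rely on learned document-understanding models rather than regexes.

```python
import re

# Hypothetical sketch: pull structured fields out of a free-text broker
# submission. Regexes here only illustrate the normalization step.
FIELD_PATTERNS = {
    "insured_name": re.compile(r"Insured:\s*(.+)"),
    "tiv": re.compile(r"Total Insured Value:\s*\$?([\d,]+)"),
    "loss_count": re.compile(r"Losses \(5 yr\):\s*(\d+)"),
}

def normalize_submission(raw_text: str) -> dict:
    """Map unstructured submission text to a normalized record.

    Missing fields are set to None so downstream underwriting checks
    can flag incomplete submissions instead of failing silently.
    """
    record = {}
    for name, pattern in FIELD_PATTERNS.items():
        match = pattern.search(raw_text)
        record[name] = match.group(1).strip() if match else None
    if record["tiv"] is not None:
        record["tiv"] = int(record["tiv"].replace(",", ""))  # "1,250,000" -> 1250000
    return record

sample = """Insured: Acme Logistics LLC
Total Insured Value: $1,250,000
Losses (5 yr): 2"""
print(normalize_submission(sample))
```

The point of the `None` convention is the downstream behavior the profile describes: an incomplete submission should surface as a flagged record, not a silent underwriting gap.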

The platform is deployed by insurers, reinsurers, MGAs, and brokers writing over $15B in premiums across all 50 U.S. states. Technical focus areas include document understanding, NLP for insurance-specific language and formats, data normalization pipelines, and workflow automation that integrates with existing carrier systems. The core technical challenge is reliability at scale across heterogeneous document types and insurance product lines, where edge cases in policy language or submission format can propagate downstream into underwriting errors or compliance gaps.

FurtherAI operates in a sector facing a projected workforce reduction of 400,000 by 2026, with approximately 3 million insurance professionals currently handling manual document processing. The system architecture must meet the latency requirements of underwriting timelines while maintaining accuracy thresholds that satisfy regulatory and risk management standards. Key operational trade-offs include throughput on batch processing of submissions versus real-time responsiveness for urgent underwriting decisions, and the cost-accuracy frontier for document parsing models across insurance product lines of varying complexity.

Open roles at FurtherAI

Explore 15 open positions at FurtherAI and find your next opportunity.


Software/AI Engineer (New Grad)

FurtherAI

San Francisco, California, United States (On-site)

$125K – $165K Yearly · 3w ago

Insurance Engineer

FurtherAI

San Francisco, California, United States (On-site)

$100K – $200K Yearly · 3w ago

Sales Development Representative

FurtherAI

San Francisco, California, United States (On-site)

$65K – $90K Yearly · 1mo ago

AI Engineer - Agent Team

FurtherAI

San Francisco, California, United States (On-site)

$150K – $250K Yearly · 2mo ago

Engagement Manager

FurtherAI

San Francisco, California, United States (On-site)

$180K – $250K Yearly · 2mo ago

Senior AI Engineer - Agent Team

FurtherAI

San Francisco, California, United States (On-site)

$225K – $300K Yearly · 2mo ago

Founding Product Designer

FurtherAI

San Francisco, California, United States (On-site)

$150K – $220K Yearly · 3mo ago

Founding Recruiter

FurtherAI

San Francisco, California, United States (On-site)

$140K – $175K Yearly · 3mo ago

Solutions Engineer

FurtherAI

San Francisco, California, United States (On-site)

$155K – $180K Yearly · 3mo ago

Social Media Specialist

FurtherAI

San Francisco, California, United States (On-site)

$80K – $120K Yearly · 3mo ago

Senior Software Engineer - Backend

FurtherAI

San Francisco, California, United States (On-site)

$150K – $250K Yearly · 3mo ago

Marketing Generalist

FurtherAI

San Francisco, California, United States (On-site)

$90K – $120K Yearly · 3mo ago

Software Engineer - Backend

FurtherAI

San Francisco, California, United States (On-site)

$150K – $250K Yearly · 3mo ago

Software Engineer - Backend/Fullstack [India]

FurtherAI

India or Remote (India)

₹4M – ₹8M Yearly · 3mo ago

Enterprise Account Executive

FurtherAI

San Francisco, California, United States (On-site)

$300K – $450K Yearly · 3mo ago

Similar companies


Nebius

Nebius is a Nasdaq-listed technology company (NBIS) building full-stack AI infrastructure from its Amsterdam headquarters, with GPU clusters deployed across Europe and the United States. Led by CEO Arkady Volozh, the company operates AI-optimized sustainable data centers - including a facility 60 kilometers from Helsinki and a new Vineland, New Jersey site - and has raised significant capital ($700 million from investors including Accel, NVIDIA, and Orbis). The engineering organization, numbering in the hundreds, maintains deep expertise in world-class infrastructure and runs an in-house AI R&D team that dogfoods the platform to validate it against production ML practitioner requirements. The infrastructure stack spans hyperscaler-scale features with supercomputer-grade performance characteristics. ISEG, Nebius's supercomputer, ranks among the world's most powerful systems. The platform integrates NVIDIA GPUs with NVIDIA InfiniBand networking, exposing workload orchestration through both Kubernetes and Slurm. The operational layer includes standard observability (Prometheus, Grafana), data infrastructure (PostgreSQL, Apache Spark), and ML tooling (MLflow, vLLM, Triton, Ray), with infrastructure-as-code managed via Terraform. This architecture targets the latency, throughput, and reliability requirements of AI training and inference workloads at scale. The company has secured a multi-billion dollar agreement with Microsoft to deliver dedicated AI infrastructure from its Vineland data center. Nebius serves startups, research institutes, and enterprises across healthcare and life sciences, robotics, finance, and entertainment verticals. The technical approach emphasizes production-grade infrastructure that handles the operational complexity of large-scale AI deployments - managing GPU utilization, network bottlenecks, and the cost-performance trade-offs inherent in serving diverse AI workloads from model training through inference serving.

477 jobs

EliseAI

EliseAI builds a unified conversational AI platform for property management and healthcare operations, automating workflows that span leasing tours, maintenance requests, patient scheduling, and intake forms. Founded in 2017, the company serves over 600 property owners and healthcare operators managing 5 million+ units, having raised $360 million in funding. The engineering organization ships 175+ new features per year, reflecting a rapid iteration cycle informed by frontline user feedback. The platform consolidates functionality that would otherwise require multiple point solutions, addressing operational bottlenecks in high-volume, repetitive administrative tasks. In property management, this includes conversational AI for leasing tour coordination and maintenance request handling. In healthcare, the system automates patient scheduling and intake form collection. The technical approach centers on a single platform architecture rather than a collection of disconnected tools, with production deployment at scale across both industry verticals. The company's engineering culture emphasizes shipping velocity and product development driven by operational constraints observed in production environments. The 175+ annual feature releases suggest continuous deployment practices and tight feedback loops between product iteration and user-facing workflows. Development priorities appear structured around reducing latency in administrative operations and improving throughput for organizations managing thousands of concurrent interactions across property portfolios or patient populations.

113 jobs

Decagon

Decagon builds a conversational AI platform designed to replace or augment legacy customer support systems by deploying intelligent AI agents across chat, email, and voice channels. The company positions its technology as infrastructure for delivering concierge-level customer experiences at scale, targeting brands looking to support, onboard, and retain customers without proportional headcount growth. Led by CEO Jesse Zhang and founded by serial entrepreneurs, Decagon operates from the US and focuses on addressing the operational constraints of traditional customer support systems. The platform's core technical approach centers on Agent Operating Procedures (AOPs), a natural-language-to-code compilation system that allows non-technical users to define agent behavior while preserving technical team control over guardrails, integrations, and versioning. This design addresses a common trade-off in AI tooling: enabling rapid iteration by domain experts without sacrificing reliability controls or introducing configuration drift. The agent orchestration layer spans multiple channels and claims to amplify CX team impact by 10x, though specific benchmarks around latency, accuracy, or failure rate are not publicly detailed. Decagon's technical domains span conversational AI, natural language processing, multichannel messaging infrastructure, and automation systems. The platform emphasizes runtime guardrails and version management as first-class concerns, reflecting a systems-oriented approach to production deployment. The company claims to deliver always-on, personalized service, positioning its agents as operational infrastructure rather than experimental tooling. For engineers evaluating opportunities, the technical challenges likely involve scaling context-rich, stateful interactions across channels while maintaining consistency, handling edge cases in natural language understanding, and building abstraction layers that balance expressiveness with safety.
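
The AOP idea described above, where domain experts author agent behavior as data while engineers retain guardrails and version control, can be caricatured in a few lines. Everything below is invented for illustration (the `Procedure`/`Runtime` classes, the refund example); it is not Decagon's API, which compiles natural language to code rather than consuming hand-written step lists.

```python
from dataclasses import dataclass, field

# Toy illustration (invented, not Decagon's API): a procedure authored as
# data by a domain expert, executed under engineer-owned guardrails, with
# explicit versioning recorded in an audit log.
@dataclass
class Procedure:
    name: str
    version: int
    steps: list  # ordered (action, params) pairs authored by a domain expert

@dataclass
class Runtime:
    max_refund: float = 100.0  # engineer-owned guardrail, not editable via the procedure
    log: list = field(default_factory=list)

    def execute(self, proc: Procedure) -> list:
        results = []
        for action, params in proc.steps:
            if action == "refund" and params["amount"] > self.max_refund:
                # Guardrail trips regardless of what the authored procedure asks for.
                results.append(("refund", "escalated_to_human"))
            else:
                results.append((action, "done"))
            # Version is logged with every step, so behavior changes are traceable.
            self.log.append((proc.name, proc.version, action))
        return results

proc = Procedure("handle_refund", version=2, steps=[
    ("verify_order", {}),
    ("refund", {"amount": 250.0}),
])
print(Runtime().execute(proc))
```

The design point this caricature captures is the separation of concerns the profile describes: rapid iteration on `steps` by non-engineers cannot bypass the runtime's guardrails, and the version stamped into the log prevents configuration drift from going unnoticed.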

89 jobs

Qdrant

Qdrant is a Rust-based vector database designed for high-dimensional similarity search at scale, serving semantic search, recommendation systems, and retrieval-augmented generation workloads. The system has processed billions of vectors across production deployments, with adoption reflected in 10 million+ downloads and 23,000 GitHub stars. The architecture leverages Rust's language-level memory safety and zero-cost abstractions for predictable performance characteristics under load, operating both as an open-source deployment target and a managed cloud service. The database handles multi-modal retrieval and real-time recommendation workloads for enterprises including HubSpot, Bayer, Bosch, and CB Insights, spanning e-commerce through healthcare verticals. The managed offering pitches faster deployment as its primary advantage, though actual production reliability depends on vector dimensionality, query patterns, and infrastructure topology. The team of 75+ distributed across 20+ countries maintains both the core engine and cloud operations, with the stack including gRPC for service boundaries, Kubernetes for orchestration, and observability through Prometheus/Grafana/OpenTelemetry. Founded in 2021 by André Zayarni and Andrey Vasnetsov, the company operates a dual open-source and managed cloud business model. The technical focus centers on scalability trade-offs in nearest neighbor search - balancing index structure overhead, query latency distribution, and write throughput as vector counts scale. Deployment options span AWS, GCP, and Azure, with Terraform for infrastructure provisioning and Docker for containerization.
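
The query a vector database answers can be shown with a brute-force stdlib sketch. This is not Qdrant's engine: Qdrant avoids exactly this O(n) scan by using approximate indexes, and the corpus, vectors, and `search` helper below are invented for illustration.

```python
import math

# Brute-force cosine-similarity search over toy 3-dimensional vectors.
# Real deployments use thousands of dimensions and approximate indexes;
# this only illustrates the nearest-neighbor query itself.
def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def search(vectors, query, limit=2):
    """Return the ids of the `limit` vectors most similar to `query`."""
    scored = sorted(vectors.items(), key=lambda kv: cosine(kv[1], query), reverse=True)
    return [doc_id for doc_id, _ in scored[:limit]]

corpus = {
    "doc_a": [1.0, 0.0, 0.0],
    "doc_b": [0.9, 0.1, 0.0],
    "doc_c": [0.0, 0.0, 1.0],
}
print(search(corpus, [1.0, 0.05, 0.0]))  # doc_a and doc_b rank ahead of doc_c
```

The scalability trade-offs the profile mentions fall out of this picture: the exact scan above is linear in vector count, while approximate index structures buy sublinear query latency at the cost of index build/maintenance overhead and a recall-accuracy trade-off.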

27 jobs