About

Decagon builds a conversational AI platform designed to replace or augment legacy customer support systems by deploying intelligent AI agents across chat, email, and voice channels. The company positions its technology as infrastructure for delivering concierge-level customer experiences at scale, targeting brands looking to support, onboard, and retain customers without proportional headcount growth. Led by CEO Jesse Zhang and founded by serial entrepreneurs, Decagon operates from the US and focuses on addressing the operational constraints of traditional customer support systems.

The platform's core technical approach centers on Agent Operating Procedures (AOPs), a natural-language-to-code compilation system that lets non-technical users define agent behavior while technical teams retain control over guardrails, integrations, and versioning. This design addresses a common trade-off in AI tooling: enabling rapid iteration by domain experts without sacrificing reliability controls or introducing configuration drift. The agent orchestration layer spans multiple channels, and the company claims it amplifies CX team impact by 10x, though specific benchmarks for latency, accuracy, or failure rate are not publicly detailed.
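
Decagon does not publish AOP internals, so the following is a purely hypothetical sketch of the pattern the description implies: plain-language steps compiled against an engineer-owned allowlist of actions, with a version number on the procedure itself. Every name here (`AgentProcedure`, `ALLOWED_ACTIONS`, `lookup_order`, `issue_refund`) is invented for illustration.

```python
from dataclasses import dataclass, field

# Hypothetical sketch only: Decagon's actual AOP compiler is not public.
# The idea: domain experts edit plain-language steps, while engineers own
# the allowlist of actions (guardrails) and the version history.

ALLOWED_ACTIONS = {  # guardrails owned by the technical team
    "lookup_order": lambda ctx: ctx | {"order": f"order for {ctx['user']}"},
    "issue_refund": lambda ctx: ctx | {"refund": "issued"},
}

@dataclass
class AgentProcedure:
    version: int
    steps: list = field(default_factory=list)

    def compile(self, text: str) -> "AgentProcedure":
        """Map each natural-language line to an allowlisted action."""
        for line in text.strip().splitlines():
            action = line.strip().lower().replace(" ", "_")
            if action not in ALLOWED_ACTIONS:
                raise ValueError(f"action {action!r} is not allowlisted")
            self.steps.append(ALLOWED_ACTIONS[action])
        return self

    def run(self, ctx: dict) -> dict:
        for step in self.steps:
            ctx = step(ctx)
        return ctx

proc = AgentProcedure(version=1).compile("Lookup order\nIssue refund")
result = proc.run({"user": "alice"})
print(result["refund"])  # prints "issued"
```

The allowlist is what keeps expressiveness and safety separated: a step that is not registered fails at compile time rather than at runtime in front of a customer.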

Decagon's technical domains span conversational AI, natural language processing, multichannel messaging infrastructure, and automation systems. The platform emphasizes runtime guardrails and version management as first-class concerns, reflecting a systems-oriented approach to production deployment. The company claims to deliver always-on, personalized service, positioning its agents as operational infrastructure rather than experimental tooling. For engineers evaluating opportunities, the technical challenges likely involve scaling context-rich, stateful interactions across channels while maintaining consistency, handling edge cases in natural language understanding, and building abstraction layers that balance expressiveness with safety.

Open roles at Decagon

Explore 75 open positions at Decagon and find your next opportunity.

Strategic Solutions Engineer, East
Decagon · United States (On-site)
$240K – $280K Yearly · 3w ago

Director of Solutions Engineering, Enterprise East
Decagon · New York, United States (On-site)
$290K – $330K Yearly · 3w ago

Private Equity Lead, Founder's Office
Decagon · San Francisco, California, United States (On-site)
$220K – $280K Yearly · 3w ago

Strategic Solutions Engineer, West
Decagon · United States (Hybrid)
$240K – $280K Yearly · 3w ago

Director of Solutions Engineering, Enterprise West
Decagon · San Francisco, California, United States (On-site)
$290K – $330K Yearly · 3w ago

RSD, Strategic Sales
Decagon · San Francisco, California, United States (On-site)
$400K – $500K Yearly · 3w ago

Strategic Growth Lead, Founder's Office
Decagon · San Francisco, California, United States (On-site)
$220K – $280K Yearly · 3w ago

Senior Manager, IT
Decagon · San Francisco, California, United States (On-site)
$200K – $250K Yearly · 1mo ago

Staff Software Engineer, Agent Development
Decagon · San Francisco, California, United States (On-site)
$300K – $430K Yearly · 1mo ago

Senior Software Engineer, Agent Development
Decagon · San Francisco, California, United States (On-site)
$250K – $330K Yearly · 1mo ago

Strategic Finance Manager
Decagon · San Francisco, California, United States (On-site)
$180K – $225K Yearly · 1mo ago

Product Manager, Research
Decagon · San Francisco, California, United States (On-site)
$200K – $285K Yearly · 1mo ago

Associate Solutions Engineer
Decagon · San Francisco, California, United States (On-site)
$100K – $130K Yearly · 1mo ago

Procurement Manager
Decagon · San Francisco, California, United States (On-site)
$175K – $210K Yearly · 1mo ago

Senior Software Engineer, Agent Orchestration
Decagon · New York, United States (On-site)
$250K – $330K Yearly · 2mo ago

Staff Software Engineer, ML Infrastructure
Decagon · San Francisco, California, United States (On-site)
$300K – $430K Yearly · 2mo ago

Staff Software Engineer, Agent Orchestration
Decagon · New York, United States (On-site)
$300K – $430K Yearly · 2mo ago

Senior Data Scientist
Decagon · San Francisco, California, United States (On-site)
$250K – $300K Yearly · 2mo ago

Implementation Manager
Decagon · London, England, United Kingdom (On-site)
£105K – £140K Yearly · 2mo ago

Director of Data & Analytics
Decagon · San Francisco, California, United States (On-site)
$230K – $300K Yearly · 2mo ago

Similar companies

Nebius

Nebius is a Nasdaq-listed technology company (NBIS) building full-stack AI infrastructure from its Amsterdam headquarters, with GPU clusters deployed across Europe and the United States. Led by CEO Arkady Volozh, the company operates AI-optimized, sustainable data centers, including a facility 60 kilometers from Helsinki and a new site in Vineland, New Jersey, and has raised $700 million from investors including Accel, NVIDIA, and Orbis. The engineering organization, numbering in the hundreds, maintains deep infrastructure expertise and runs an in-house AI R&D team that dogfoods the platform to validate it against production ML practitioner requirements.

The infrastructure stack combines hyperscaler-scale features with supercomputer-grade performance: ISEG, Nebius's supercomputer, ranks among the world's most powerful systems. The platform integrates NVIDIA GPUs with NVIDIA InfiniBand networking, exposing workload orchestration through both Kubernetes and Slurm. The operational layer includes standard observability (Prometheus, Grafana), data infrastructure (PostgreSQL, Apache Spark), and ML tooling (MLflow, vLLM, Triton, Ray), with infrastructure as code managed via Terraform. This architecture targets the latency, throughput, and reliability requirements of AI training and inference workloads at scale.

The company has secured a multi-billion-dollar agreement with Microsoft to deliver dedicated AI infrastructure from its Vineland data center. Nebius serves startups, research institutes, and enterprises across healthcare and life sciences, robotics, finance, and entertainment verticals. The technical approach emphasizes production-grade infrastructure that absorbs the operational complexity of large-scale AI deployments: managing GPU utilization, network bottlenecks, and the cost-performance trade-offs of serving diverse workloads from model training through inference serving.

477 jobs

Qdrant

Qdrant is a Rust-based vector database designed for high-dimensional similarity search at scale, serving semantic search, recommendation systems, and retrieval-augmented generation workloads. The system has processed billions of vectors across production deployments, with adoption reflected in 10 million+ downloads and 23,000 GitHub stars. The architecture leverages Rust's memory safety and zero-cost abstractions for predictable performance under load, and operates both as an open-source deployment target and a managed cloud service.

The database handles multi-modal retrieval and real-time recommendation workloads for enterprises including HubSpot, Bayer, Bosch, and CB Insights, spanning e-commerce through healthcare verticals. The managed offering aims primarily to cut deployment time, though actual production reliability depends on vector dimensionality, query patterns, and infrastructure topology. The team of 75+ people distributed across 20+ countries maintains both the core engine and cloud operations, with a stack that includes gRPC for service boundaries, Kubernetes for orchestration, and observability through Prometheus, Grafana, and OpenTelemetry.

Founded in 2021 by André Zayarni and Andrey Vasnetsov, the company operates a dual open-source and managed-cloud business model. The technical focus centers on scalability trade-offs in nearest-neighbor search: balancing index structure overhead, query latency distribution, and write throughput as vector counts grow. Deployment options span AWS, GCP, and Azure, with Terraform for infrastructure provisioning and Docker for containerization.
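
The workload described above can be grounded with the naive baseline that index structures such as HNSW are built to beat: exact brute-force cosine similarity, which scans every vector per query. The toy corpus and function names below are illustrative only, not Qdrant's API.

```python
import math

# Illustrative only: exact nearest-neighbor search by brute force.
# Real engines replace this O(n * d) scan with approximate indexes
# (e.g. HNSW) to keep query latency flat as vector counts grow.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def top_k(query, corpus, k=2):
    """Exact k-nearest-neighbor search: one full scan per query."""
    scored = sorted(corpus.items(), key=lambda kv: cosine(query, kv[1]), reverse=True)
    return [doc_id for doc_id, _ in scored[:k]]

corpus = {
    "doc_a": [1.0, 0.0, 0.0],
    "doc_b": [0.9, 0.1, 0.0],
    "doc_c": [0.0, 1.0, 0.0],
}
print(top_k([1.0, 0.05, 0.0], corpus))  # prints ['doc_a', 'doc_b']
```

The trade-offs named in the blurb fall out of this baseline: an index spends memory and write-time overhead to avoid the full scan, at the cost of approximate rather than exact results.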

27 jobs

FurtherAI

FurtherAI builds domain-specific AI infrastructure for commercial insurance workflows, targeting the document-heavy operational bottlenecks that dominate underwriting, claims processing, and policy comparison work. Its AI Workspace handles submission intake, underwriting audits, and compliance checks by parsing and normalizing unstructured data from broker letters, property schedules, ACORD forms, and loss histories. The system reports 95–97% accuracy on these tasks versus 70–77% for manual processing, addressing a workflow layer where precision directly affects underwriting decisions and operational throughput. The platform is deployed by insurers, reinsurers, MGAs, and brokers writing over $15B in premiums across all 50 U.S. states.

Technical focus areas include document understanding, NLP for insurance-specific language and formats, data normalization pipelines, and workflow automation that integrates with existing carrier systems. The core technical challenge is reliability at scale across heterogeneous document types and insurance product lines, where edge cases in policy language or submission format can propagate downstream into underwriting errors or compliance gaps.

FurtherAI operates in a sector facing a projected workforce reduction of 400,000 by 2026, with approximately 3 million insurance professionals currently handling manual document processing. The system architecture must meet the latency requirements of underwriting timelines while maintaining accuracy thresholds that satisfy regulatory and risk-management standards. Key operational trade-offs include batch-processing throughput for submissions versus real-time responsiveness for urgent underwriting decisions, and the cost-accuracy frontier for document-parsing models across insurance products of varying complexity.
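
FurtherAI's actual pipeline is not public, so the sketch below only shows the shape of one sub-problem the blurb names: the same field arriving under different labels across broker documents, normalized onto one canonical schema. The field names and alias table are invented for illustration.

```python
# Hypothetical sketch: normalizing heterogeneous submission labels.
# Unknown labels are routed to manual review rather than guessed at,
# since a silent mismatch can propagate into underwriting errors.

CANONICAL_FIELDS = {
    "insured name": "insured_name",
    "named insured": "insured_name",
    "tiv": "total_insured_value",
    "total insured value": "total_insured_value",
}

def normalize_submission(raw: dict) -> dict:
    """Map document labels onto a canonical schema, collecting
    unmatched labels for human review."""
    clean, unmatched = {}, []
    for key, value in raw.items():
        canon = CANONICAL_FIELDS.get(key.strip().lower())
        if canon is None:
            unmatched.append(key)
        else:
            clean[canon] = value
    return {"fields": clean, "needs_review": unmatched}

out = normalize_submission(
    {"Named Insured": "Acme Corp", "TIV": "12500000", "Loss Runs": "attached"}
)
print(out["fields"]["insured_name"])  # prints Acme Corp
```

The explicit `needs_review` bucket reflects the reliability constraint in the text: in a regulated workflow, flagging an unrecognized format is cheaper than a wrong normalization.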

18 jobs