
Granola

About

This company hasn't shared a description yet.

Similar companies


Decagon

Decagon builds a conversational AI platform designed to replace or augment legacy customer support systems by deploying intelligent AI agents across chat, email, and voice channels. The company positions its technology as infrastructure for delivering concierge-level customer experiences at scale, targeting brands looking to support, onboard, and retain customers without proportional headcount growth. Led by CEO Jesse Zhang and founded by serial entrepreneurs, Decagon operates from the US and focuses on the operational constraints of traditional customer support systems.

The platform's core technical approach centers on Agent Operating Procedures (AOPs), a natural-language-to-code compilation system that allows non-technical users to define agent behavior while preserving technical-team control over guardrails, integrations, and versioning. This design addresses a common trade-off in AI tooling: enabling rapid iteration by domain experts without sacrificing reliability controls or introducing configuration drift.

The agent orchestration layer spans multiple channels; the company claims it amplifies CX team impact by 10x, though specific benchmarks around latency, accuracy, or failure rate are not publicly detailed. Decagon's technical domains span conversational AI, natural language processing, multichannel messaging infrastructure, and automation systems. The platform treats runtime guardrails and version management as first-class concerns, reflecting a systems-oriented approach to production deployment, and the company claims to deliver always-on, personalized service, positioning its agents as operational infrastructure rather than experimental tooling.

For engineers evaluating opportunities, the technical challenges likely involve scaling context-rich, stateful interactions across channels while maintaining consistency, handling edge cases in natural language understanding, and building abstraction layers that balance expressiveness with safety.
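The AOP separation is easier to see in a toy sketch. Decagon has not published its AOP format, so everything below is hypothetical: CompiledProcedure, the generated handler, and the guardrail are invented names illustrating only the division the profile describes, where domain experts edit the prose source while engineers own versioning and post-generation guardrails.

```python
from dataclasses import dataclass, field
from typing import Callable

# Hypothetical sketch of the natural-language-to-code idea behind AOPs:
# a domain expert writes the procedure in prose, a compile step turns it
# into an executable handler, and engineers pin a version and guardrails
# that the compiled code cannot override. All names here are invented.

@dataclass
class CompiledProcedure:
    version: str
    source_text: str                # the prose the domain expert wrote
    handler: Callable[[dict], str]  # stand-in for code generated from source_text
    guardrails: list[Callable[[str], bool]] = field(default_factory=list)

    def run(self, conversation: dict) -> str:
        reply = self.handler(conversation)
        # Engineer-owned guardrails run on the generated output, so edits
        # to the prose can speed iteration but never bypass the checks.
        for check in self.guardrails:
            if not check(reply):
                return "Let me connect you with a human agent."
        return reply

# Example: a refund procedure with a guardrail forbidding dollar amounts.
proc = CompiledProcedure(
    version="refunds-v12",
    source_text="If the order is under 30 days old, offer a refund...",
    handler=lambda convo: "I can start a refund for that order.",
    guardrails=[lambda reply: "$" not in reply],
)
print(proc.run({"order_age_days": 12}))
```

The point of the split is that iteration speed and safety live in different places: prose edits reach production quickly, while the guardrail list and version pin stay under engineering review.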

89 jobs

Cartesia

Cartesia builds real-time multimodal AI models for voice applications, with production systems spanning text-to-speech and speech-to-text. The company emerged from Stanford's AI Lab, where the founding team, led by CEO Karan Goel, pioneered work on State Space Models (SSMs) before transitioning to commercial infrastructure. Their technical approach combines model innovation with systems engineering, focusing on the latency, throughput, and operational constraints that define production voice AI.

The core product line includes Sonic, a text-to-speech model designed for emotive, human-like output, and Ink, a recently launched speech-to-text system purpose-built for real-time voice applications. Both systems address the fundamental trade-off in voice AI: achieving low-latency inference while maintaining quality at scale. The company's technical domains span foundation model development, real-time multimodal intelligence, and developer tooling, emphasizing infrastructure that runs where users are rather than requiring server-side processing.

Cartesia's engineering stack runs on Python, Go, and TypeScript, supporting developers building voice interfaces that demand sub-second response times and reliable performance under production load. The team's research background in SSMs informs their approach to model efficiency and scalability, though the company now focuses on shipping production systems rather than pure research. Their stated mission centers on ubiquitous, interactive intelligence: systems that handle the operational complexity of real-time voice while remaining accessible to developers building conversational interfaces.
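The sub-second constraint the profile mentions is usually measured as time to first audio chunk rather than total synthesis time. Below is a minimal Python sketch of that measurement; the endpoint, payload fields, and streaming format are placeholders, not Cartesia's actual API.

```python
import time
import requests  # pip install requests

# Hypothetical streaming TTS client: the URL and payload below are invented
# for illustration. The metric that matters for real-time voice apps is
# time to first audio byte, since playback can begin before synthesis ends.

API_URL = "https://api.example.com/tts/stream"  # placeholder endpoint

def time_to_first_audio(text: str) -> float:
    start = time.monotonic()
    with requests.post(
        API_URL,
        json={"text": text, "voice": "demo", "format": "pcm_16000"},
        stream=True,
        timeout=10,
    ) as resp:
        resp.raise_for_status()
        for chunk in resp.iter_content(chunk_size=4096):
            if chunk:  # first non-empty audio chunk has arrived
                return time.monotonic() - start
    raise RuntimeError("stream ended before any audio arrived")

if __name__ == "__main__":
    latency = time_to_first_audio("Hello, how can I help you today?")
    print(f"time to first audio: {latency * 1000:.0f} ms")
```

For a conversational interface, this number plus speech recognition and model latency has to stay under roughly a second to feel like a natural turn-taking exchange, which is why streaming output matters more than raw throughput here.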

30 jobs

Modal

Modal operates a serverless compute platform designed to minimize infrastructure friction for ML inference, fine-tuning, and batch workloads. The platform provides instant GPU access with usage-based pricing, targeting teams that need to ship compute-intensive applications without managing scheduling, container orchestration, or resource allocation. The architecture is built on custom infrastructure components, including an in-house file system, container runtime, scheduler, and image builder, optimized for the latency and throughput characteristics of AI workloads.

The technical stack spans Python, Rust, and Go at the systems level, with PyTorch, CUDA, vLLM, and TensorRT support for ML frameworks. This reflects prioritization of both developer ergonomics (a Python interface) and low-level performance (Rust and Go for runtime components). The custom infrastructure signals investment in controlling the full vertical, from container initialization through GPU scheduling, rather than composing existing orchestration layers.

The team operates across New York, Stockholm, and San Francisco, and includes creators of open-source projects like Seaborn and Luigi, alongside academic researchers and engineers with experience building production systems. The platform positions itself around developer experience as a core constraint, with infrastructure complexity abstracted to reduce operational overhead for data and AI teams.
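The developer-ergonomics claim is concrete enough to sketch. The snippet below follows the shape of Modal's public Python SDK (App, Image, the gpu parameter, .remote()); names track current documentation but may drift between releases, so treat it as illustrative rather than canonical.

```python
import modal  # pip install modal

# Minimal sketch of the workflow the profile describes: the decorator
# packages the function into a container image and runs it on a GPU in
# Modal's cloud, with no scheduler or cluster configuration written.

app = modal.App("inference-sketch")
image = modal.Image.debian_slim().pip_install("torch")

@app.function(gpu="A10G", image=image)
def square_on_gpu(x: float) -> float:
    import torch  # imported inside the container, where torch is installed
    t = torch.tensor([x], device="cuda")
    return float(t * t)

@app.local_entrypoint()
def main():
    # .remote() ships the call to Modal; usage-based billing covers only
    # the seconds the container actually runs.
    print(square_on_gpu.remote(3.0))
```

Running `modal run` on this file builds the image, provisions the GPU, and executes the call, which is the "instant GPU access" workflow the profile refers to: the resource request is a function decorator rather than an orchestration config.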

28 jobs