About

Ada operates an omnichannel AI platform for enterprise customer service automation, processing interactions across chat, voice, email, and social channels. The platform has powered 5.5 billion interactions since 2016 and reports automated resolution of 83% of customer conversations. The stack runs on AWS infrastructure, with Python, JavaScript/TypeScript, and React for core platform components and data handling through Redshift, MongoDB, Elasticsearch, and ClickHouse. Message processing runs through RabbitMQ, and Terraform manages infrastructure as code. The platform integrates with enterprise systems including Jira, Zendesk, and Salesforce.

Founded in 2016 and backed by over $250M in funding from Accel, Bessemer, FirstMark, Spark, and Version One Ventures, Ada positions itself as providing both technology and transformation services to accelerate enterprise AI maturity. Customer deployments span fintech (Square), consumer goods (YETI), and SaaS (Monday.com) verticals. The company's agent management technology focuses on deployment optimization and performance improvement for AI-powered service automation at scale.

The technical approach emphasizes omnichannel consistency and enterprise integration requirements. Scale metrics reference millions of hours saved through automation, though these figures represent customer-reported outcomes rather than independently verified benchmarks. The platform operates as a managed service with strategic consulting components aimed at enterprise adoption patterns and operational transformation beyond pure technical deployment.

Open roles at Ada

Explore 13 open positions at Ada and find your next opportunity.

Senior Product Designer, AI Manager Experience

ada

Canada + 1 more (Remote)

C$135K – C$160K Yearly · 3d ago

Lead, GTM Enablement

ada

Canada (Remote)

C$135.1K – C$172.8K Yearly · 2w ago

Technical Support Advisor

ada

Central Singapore Community Development Council, SG or Remote (Singapore)

2w ago

Presales Consultant

ada

British Columbia, Canada (Remote)

C$120K – C$160K Yearly · 3w ago

Sales Development Representative

ada

Toronto, Ontario, Canada (Hybrid)

4w ago

Customer Solutions Consultant II

ada

United Kingdom (Remote)

4w ago

Senior Product Manager

ada

Canada (Remote)

C$160K – C$200K Yearly · 1mo ago

Senior Cloud Security Engineer

ada

Canada (Remote)

C$120K – C$150K Yearly · 1mo ago

Senior Data Analyst

ada

Canada (Remote)

C$110K – C$130K Yearly · 2mo ago

Similar companies

Cohere

Cohere builds enterprise-focused foundational models designed for production deployment with emphasis on security, privacy, and operational trust. Founded in 2019 in Toronto, the company has raised nearly $1 billion and scaled to hundreds of employees worldwide. The technical focus spans semantic search, content generation, and customer experience applications - domains where model reliability and data governance are non-negotiable constraints for enterprise adoption.

The company's architecture decisions reflect production realities over research novelty. Models are architected for deployment into regulated environments where data residency, access controls, and audit trails matter as much as accuracy metrics. This positioning addresses the gap between frontier model capabilities and enterprise operational requirements: latency SLAs, cost predictability, and compliance frameworks that prevent many organizations from operationalizing public AI APIs.

Cohere Labs has published over 100 papers and built a research community of 4,500+ researchers, signaling ongoing investment in foundational work rather than pure application-layer focus. The team composition skews heavily toward researchers and engineers from academic backgrounds, which maps to the technical challenge space - building models that balance performance, safety constraints, and deployment flexibility across varied enterprise infrastructure.

106 jobs

Together AI

Together AI operates a purpose-built GPU cloud platform for training, fine-tuning, and deploying generative AI models. The infrastructure is designed without vendor lock-in, serving developers and organizations that need to run open-source models at scale. The engineering work centers on distributed systems, model optimization, and AI infrastructure - areas where trade-offs between throughput, latency, and operational complexity define production viability.

The company maintains active contributions to open-source projects including FlashAttention, Mamba, and RedPajama. Engineers and researchers work in close proximity, with new hires taking ownership of substantial technical challenges from the start. The tech stack spans PyTorch, CUDA, TensorRT, TensorRT-LLM, vLLM, SGLang, and TGI, reflecting the requirement to support multiple inference backends and optimization paths. Work involves designing distributed inference engines and developing model architectures where performance characteristics - memory bandwidth utilization, kernel fusion opportunities, multi-GPU coordination overhead - directly impact what models can run economically in production.

Technical problems include optimizing inference for various model architectures across heterogeneous GPU clusters, managing the reliability and cost trade-offs in serving large language models, and building tooling that makes open-source AI accessible without sacrificing control over deployment parameters. The platform must handle the operational complexity of supporting diverse workloads: training runs with different parallelization strategies, fine-tuning jobs with varying dataset sizes, and inference deployments where tail latency matters.

83 jobs

Perplexity

Perplexity operates an AI-powered answer engine processing over 150 million questions weekly across web, mobile, and enterprise platforms. Founded in 2022, the system combines real-time web search with multiple LLMs to deliver source-attributed answers. The architecture serves both consumer and enterprise workloads, with enterprise deployments requiring security guarantees for knowledge worker use cases including legal research partnerships with organizations like Latham & Watkins.

The technical stack runs on AWS infrastructure with Terraform for provisioning, Python and Go for backend services, and PyTorch with DeepSpeed and FSDP for model training and inference. Data pipelines use dbt, SQL, Snowflake, and Databricks. Frontend implementations use React and TypeScript, with Docker containerization and Open Policy Agent for access control. This architecture must handle tail latency and throughput requirements for real-time search retrieval paired with LLM inference at consumer scale, while maintaining source credibility verification in the critical path.

The engineering focus centers on information retrieval accuracy, model response quality, and citation reliability rather than advertising optimization. Production systems must balance inference cost against answer quality across multiple models, manage retrieval latency for real-time web indexing, and maintain reliability for both free-tier consumer traffic and enterprise SLA requirements. Pro tier monetization suggests capacity-based or model selection tiering rather than pure ad-based revenue.

76 jobs

Lorikeet

Lorikeet builds AI agent infrastructure for customer support in complex, regulated environments where traditional chatbot deflection fails. The platform handles end-to-end resolution across voice, chat, and email channels, with agents that query backend systems, execute actions, and navigate multi-step decision trees rather than routing to self-service. The architecture is designed for flexibility in high-stakes scenarios - financial services disputes, healthcare eligibility determinations, compliance-sensitive transactions - where precision in data retrieval and action execution is non-negotiable.

The system serves fintechs, healthtechs, crypto marketplaces, delivery platforms, and energy providers - sectors where support tickets frequently require cross-system lookups, stateful workflows, and regulatory constraint adherence. Lorikeet's positioning centers on solving tickets that involve both complex decision logic and the operational risk of incorrect system mutations, rather than optimizing for deflection rates or simple FAQ coverage. The platform's value proposition scales with ticket complexity: environments where support resolution requires chaining multiple API calls, interpreting policy logic, and maintaining audit trails under regulatory oversight.

Headquartered in Australia with a global customer base, Lorikeet targets organizations where support automation must balance throughput gains against the operational and compliance costs of failure modes in agent reasoning or action execution. The technical challenge space includes latency management across synchronous system integrations, reliability in multi-turn voice interactions, and maintaining accuracy bounds when agents operate with partial information or ambiguous customer intent.

9 jobs

AI21 Labs

AI21 Labs builds enterprise foundation models and orchestration systems designed for deployment under operational constraints: hallucination mitigation, air-gapped environments, long-context efficiency, and human-in-the-loop reliability. Founded in 2017 and backed by $336 million from NVIDIA, Google, and Intel, the company focuses on controllability and deployment flexibility over benchmarks optimized for consumer use cases. Infrastructure spans SaaS, hybrid cloud, and fully air-gapped configurations, addressing compliance and latency requirements in mission-critical workflows.

The Jamba architecture is a hybrid SSM-Transformer model targeting long-context tasks, claiming 30% efficiency improvements over pure-Transformer approaches on context-heavy workloads - trade-offs center on memory bandwidth and kernel fusion vs. attention quality at scale. AI21 Maestro provides orchestration primitives for agentic systems, routing escalation to human operators when confidence thresholds are breached or task complexity exceeds model capacity - design emphasis on bounded reliability rather than full autonomy.

Technical stack includes standard distributed training infrastructure (PyTorch, DeepSpeed, FSDP, Megatron) and inference optimization tooling (Triton, CUDA kernels). Deployment and serving layers run on Kubernetes with PostgreSQL, Redis, and vector stores (pgvector, Aurora, AlloyDB) for retrieval and state management. Engineering decisions appear driven by production failure modes - hallucination containment, latency tail management, and operational debuggability in regulated environments - rather than maximizing throughput on fixed benchmarks.

6 jobs

Clarifai

Clarifai operates a full-stack AI platform spanning data preparation, model training, deployment, and monitoring across computer vision, NLP, and audio domains. The platform serves over 400,000 users across 170+ countries, delivering billions of predictions with access to more than 1 million models. Founded in 2013 by Matthew Zeiler after taking top-five placements at ImageNet 2013, the company has raised $100 million in funding from Menlo Ventures, Union Square Ventures, NVIDIA, Google Ventures, and Qualcomm. Customers include Amazon, Siemens, NVIDIA, Canva, Vimeo, and OpenTable.

The inference architecture supports orchestrated compute across AWS, GCP, and Azure, with edge deployment through Local Runners for on-premises scenarios. The platform integrates PyTorch, TensorFlow, JAX, Nvidia Triton, and ONNX, with reported performance of 544 tokens per second on GPT-OSS-120B. Technical focus areas include image classification, video analysis, multimodal processing, and MLOps workflows. The stack runs on Python and Golang, with Kubeflow for pipeline orchestration.

The company positions itself as enterprise- and developer-focused, addressing the full AI lifecycle from unstructured data ingestion through production monitoring. Forrester recognized Clarifai as a leader in its Computer Vision report. The platform's scope spans model training, inference orchestration, and operational deployment across cloud and edge environments, serving use cases in e-commerce, manufacturing, semiconductors, creative software, media, and hospitality verticals.

2 jobs