
About

Lorikeet builds AI agent infrastructure for customer support in complex, regulated environments where traditional chatbot deflection fails. The platform handles end-to-end resolution across voice, chat, and email channels, with agents that query backend systems, execute actions, and navigate multi-step decision trees rather than routing to self-service. The architecture is designed for flexibility in high-stakes scenarios - financial services disputes, healthcare eligibility determinations, compliance-sensitive transactions - where precision in data retrieval and action execution is non-negotiable.

The system serves fintechs, healthtechs, crypto marketplaces, delivery platforms, and energy providers - sectors where support tickets frequently require cross-system lookups, stateful workflows, and adherence to regulatory constraints. Lorikeet's positioning centers on solving tickets that involve both complex decision logic and the operational risk of incorrect system mutations, rather than optimizing for deflection rates or simple FAQ coverage. The platform's value proposition scales with ticket complexity, peaking in environments where support resolution requires chaining multiple API calls, interpreting policy logic, and maintaining audit trails under regulatory oversight.
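To make that resolution pattern concrete, here is a minimal Python sketch of the workflow the paragraph above describes - chained lookups, a policy gate before any mutation, and an audit trail. Every name in it (dispute_api, policy, AuditLog, the decision strings) is invented for illustration and is not Lorikeet's actual API.

    # Hypothetical sketch of a chained-lookup, policy-gated resolution flow.
    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class AuditLog:
        entries: list = field(default_factory=list)

        def record(self, step, detail):
            # Every read and write is timestamped so the resolution can be reviewed later.
            self.entries.append({
                "ts": datetime.now(timezone.utc).isoformat(),
                "step": step,
                "detail": detail,
            })

    def resolve_dispute(ticket, dispute_api, policy, audit):
        # Step 1: cross-system lookup before any action is taken.
        txn = dispute_api.get_transaction(ticket["transaction_id"])
        audit.record("lookup", {"transaction_id": txn["id"], "amount": txn["amount"]})

        # Step 2: policy logic decides whether the agent may act on its own.
        decision = policy.evaluate(txn, ticket)
        audit.record("policy_check", {"decision": decision})
        if decision != "auto_refund_allowed":
            audit.record("escalate", {"reason": decision})
            return "escalated_to_human"

        # Step 3: the only mutation, executed after the policy gate has passed.
        result = dispute_api.issue_refund(txn["id"], amount=txn["amount"])
        audit.record("refund", {"refund_id": result["refund_id"]})
        return "resolved"

The design choice the sketch highlights is that the single mutating call sits behind both a policy decision and an audit record, which is where the "operational risk of incorrect system mutations" is managed.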

Headquartered in Australia with a global customer base, Lorikeet targets organizations where support automation must balance throughput gains against the operational and compliance costs of failure modes in agent reasoning or action execution. The technical challenge space includes latency management across synchronous system integrations, reliability in multi-turn voice interactions, and maintaining accuracy bounds when agents operate with partial information or ambiguous customer intent.
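As a rough illustration of the latency-management problem in synchronous integrations, the sketch below bounds a backend call with a per-turn budget and falls back to a holding response when the budget is exhausted. The budget value and the core_banking call are hypothetical, not Lorikeet's integrations.

    # Illustrative only: bound a synchronous integration call inside a voice turn.
    import concurrent.futures

    _pool = concurrent.futures.ThreadPoolExecutor(max_workers=8)

    def call_with_budget(fn, *args, budget_s=1.5, fallback=None):
        # Submit the integration call to a shared pool and give up once the per-turn
        # budget is spent; the slow call keeps running in the background, but the
        # conversation is not blocked waiting for it.
        future = _pool.submit(fn, *args)
        try:
            return future.result(timeout=budget_s)
        except concurrent.futures.TimeoutError:
            return fallback

    # Hypothetical usage inside a voice turn:
    # balance = call_with_budget(core_banking.get_balance, account_id,
    #                            fallback="I'm still checking that for you.")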

Open roles at Lorikeet

Explore 7 open positions at Lorikeet and find your next opportunity.


Forward Deployed Product Manager

Lorikeet

London, England, United Kingdom (On-site)

6d ago

Forward Deployed AI Engineer

Lorikeet

Sydney, New South Wales, Australia (On-site)

3w ago

Senior Product Manager

Lorikeet

Sydney, New South Wales, Australia (On-site)

3w ago

Senior Software Engineer

Lorikeet

Surry Hills, Sydney, New South Wales, Australia (On-site)

1mo ago

Software Engineer

Lorikeet

Surry Hills, Sydney, New South Wales, Australia (On-site)

1mo ago

Account Executive

Lorikeet

London, England, United Kingdom or Remote (United Kingdom)

3mo ago

Account Executive

Lorikeet

United States or Remote (United States)

3mo ago

Similar companies


Graphcore

Graphcore, a British semiconductor company and wholly owned subsidiary of SoftBank Group, develops specialized AI compute hardware centered on its Intelligence Processing Unit (IPU). The IPU represents a processor architecture specifically designed for machine intelligence workloads rather than general-purpose computing. The company built a complete AI compute stack spanning silicon design through datacenter infrastructure, including the Poplar software framework that sits atop the hardware. Graphcore brought the first Wafer-on-Wafer AI processor to market, a packaging approach that addresses the bandwidth and latency constraints inherent in traditional chip-to-chip interconnects for AI workloads.

The technical scope encompasses semiconductor engineering, processor design, and AI-specific optimizations across both hardware and software layers. The engineering team works on silicon design, wafer-scale integration technology, and the development of tools for AI model optimization. The software stack includes developer tools designed to extract performance from the IPU architecture, with ongoing work to optimize popular AI models for the platform. This systems-level approach attempts to address the throughput and efficiency bottlenecks that emerge when running large-scale machine learning workloads on conventional processor architectures.

Under CEO Nigel Toon's leadership, Graphcore operates with global presence and maintains teams of semiconductor, software, and AI specialists. The company's technology stack includes standard datacenter interfaces (PCIe, DDR, Ethernet) alongside proprietary elements like the IPU and Poplar software. The subsidiary structure under SoftBank provides backing for continued development of both the silicon and the software layers required to compete in AI compute infrastructure, where the trade-offs between custom silicon development costs and performance gains define commercial viability.

197 jobs

Cohere

Cohere builds enterprise-focused foundation models designed for production deployment with emphasis on security, privacy, and operational trust. Founded in 2019 in Toronto, the company has raised nearly $1 billion and scaled to hundreds of employees worldwide. The technical focus spans semantic search, content generation, and customer experience applications - domains where model reliability and data governance are non-negotiable constraints for enterprise adoption.

The company's architecture decisions reflect production realities over research novelty. Models are architected for deployment into regulated environments where data residency, access controls, and audit trails matter as much as accuracy metrics. This positioning addresses the gap between frontier model capabilities and enterprise operational requirements: latency SLAs, cost predictability, and compliance frameworks that prevent many organizations from operationalizing public AI APIs.

Cohere Labs has published over 100 papers and built a research community of 4,500+ researchers, signaling ongoing investment in foundational work rather than pure application-layer focus. The team composition skews heavily toward researchers and engineers from academic backgrounds, which maps to the technical challenge space - building models that balance performance, safety constraints, and deployment flexibility across varied enterprise infrastructure.

106 jobs

Perplexity

Perplexity operates an AI-powered answer engine processing over 150 million questions weekly across web, mobile, and enterprise platforms. Founded in 2022, the system combines real-time web search with multiple LLMs to deliver source-attributed answers. The architecture serves both consumer and enterprise workloads, with enterprise deployments requiring security guarantees for knowledge worker use cases including legal research partnerships with organizations like Latham & Watkins.

The technical stack runs on AWS infrastructure with Terraform for provisioning, Python and Go for backend services, and PyTorch with DeepSpeed and FSDP for model training and inference. Data pipelines use dbt, SQL, Snowflake, and Databricks. Frontend implementations use React and TypeScript, with Docker containerization and Open Policy Agent for access control. This architecture must handle tail latency and throughput requirements for real-time search retrieval paired with LLM inference at consumer scale, while maintaining source credibility verification in the critical path.

The engineering focus centers on information retrieval accuracy, model response quality, and citation reliability rather than advertising optimization. Production systems must balance inference cost against answer quality across multiple models, manage retrieval latency for real-time web indexing, and maintain reliability for both free-tier consumer traffic and enterprise SLA requirements. Pro tier monetization suggests capacity-based or model selection tiering rather than pure ad-based revenue.
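The retrieve-then-answer-with-citations pattern referenced above can be sketched generically as follows; search_web and llm.complete are hypothetical stand-ins for a search backend and an LLM client, not Perplexity's internal APIs.

    # Generic retrieval-augmented answering with inline citation markers.
    def answer_with_sources(question, search_web, llm, k=5):
        # 1) Real-time retrieval: fetch the top-k documents for the question.
        docs = search_web(question, limit=k)

        # 2) Number the sources so the model can cite them inline as [1], [2], ...
        context = "\n\n".join(
            f"[{i + 1}] {d['title']}\n{d['snippet']}" for i, d in enumerate(docs)
        )
        prompt = (
            "Answer the question using only the numbered sources below and cite "
            "them inline like [1].\n\n"
            f"Sources:\n{context}\n\nQuestion: {question}"
        )

        # 3) Generation: the answer carries citation markers that map back to URLs.
        answer = llm.complete(prompt)
        return {"answer": answer, "sources": [d["url"] for d in docs]}

Keeping the retrieval step in the critical path is what makes the tail-latency and source-verification constraints described above bind on every request.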

76 jobs

ada

Ada operates an omnichannel AI platform for enterprise customer service automation, processing interactions across chat, voice, email, and social channels. The platform has powered 5.5 billion interactions since 2016, with a reported automated resolution rate of 83% of customer conversations.

The architecture sits on AWS infrastructure using Python, JavaScript/TypeScript, and React for core platform components, with data handling through Redshift, MongoDB, Elasticsearch, and ClickHouse. Message processing runs through RabbitMQ, with Terraform managing infrastructure as code. The platform integrates with enterprise systems including Jira, Zendesk, and Salesforce.

Founded in 2016 and backed by over $250M in funding from Accel, Bessemer, FirstMark, Spark, and Version One Ventures, Ada positions itself as providing both technology and transformation services to accelerate enterprise AI maturity. Customer deployments span fintech (Square), consumer goods (YETI), and SaaS (Monday.com) verticals. The company's agent management technology focuses on deployment optimization and performance improvement for AI-powered service automation at scale.

The technical approach emphasizes omnichannel consistency and enterprise integration requirements. Scale metrics reference millions of hours saved through automation, though these figures represent customer-reported outcomes rather than independently verified benchmarks. The platform operates as a managed service with strategic consulting components aimed at enterprise adoption patterns and operational transformation beyond pure technical deployment.
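As a small illustration of the RabbitMQ-backed message processing mentioned in the stack above, the following sketch publishes a conversation event to a durable queue using the standard pika client; the queue name and payload shape are invented for the example and are not Ada's internal schema.

    # Publish an interaction event to RabbitMQ with pika (illustrative only).
    import json
    import pika

    connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
    channel = connection.channel()

    # Durable queue so interaction events survive a broker restart.
    channel.queue_declare(queue="conversation_events", durable=True)

    event = {"conversation_id": "c-123", "channel": "chat", "event": "message_received"}
    channel.basic_publish(
        exchange="",
        routing_key="conversation_events",
        body=json.dumps(event),
        properties=pika.BasicProperties(delivery_mode=2),  # persist the message
    )
    connection.close()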

21 jobs

Clarifai

Clarifai operates a full-stack AI platform spanning data preparation, model training, deployment, and monitoring across computer vision, NLP, and audio domains. The platform serves over 400,000 users across 170+ countries, delivering billions of predictions with access to more than 1 million models. Founded in 2013 by Matthew Zeiler after winning top five placements at ImageNet 2013, the company has raised $100 million in funding from Menlo Ventures, Union Square Ventures, NVIDIA, Google Ventures, and Qualcomm. Customers include Amazon, Siemens, NVIDIA, Canva, Vimeo, and OpenTable.

The inference architecture supports orchestrated compute across AWS, GCP, and Azure, with edge deployment through Local Runners for on-premises and edge scenarios. The platform integrates PyTorch, TensorFlow, JAX, Nvidia Triton, and ONNX, with reported performance of 544 tokens per second on GPT-OSS-120B. Technical focus areas include image classification, video analysis, multimodal processing, and MLOps workflows. The stack runs on Python and Golang, with Kubeflow for pipeline orchestration.

The company positions itself as enterprise- and developer-focused, addressing the full AI lifecycle from unstructured data ingestion through production monitoring. Forrester recognized Clarifai as a leader in its Computer Vision report. The platform's scope spans model training, inference orchestration, and operational deployment across cloud and edge environments, serving use cases in e-commerce, manufacturing, semiconductors, creative software, media, and hospitality verticals.

2 jobs

Gladia

Gladia operates speech-to-text APIs across two distinct workloads: real-time streaming at sub-300ms latency and asynchronous batch transcription, both supporting over 100 languages. The real-time path handles streaming audio with integrated speaker diarization, word-level timestamps, and sentiment analysis in the inference loop. The async path processes batch jobs with code-switching detection - single utterances spanning multiple languages - and comparable feature coverage. Over 150,000 users and 700 enterprise deployments (including VEED.IO, Circleback, Attention) generate production traffic against these endpoints.

The core technical challenge is maintaining sub-300ms end-to-end latency on the streaming path while running diarization and alignment models alongside the primary ASR stack. Meeting this threshold at scale - across 100+ language models with varying acoustic characteristics - requires careful management of model load times, batching strategies, and inference queue depth. The async API trades latency tolerance for throughput optimization on longer-form audio, though specific cost-per-hour or throughput metrics are not disclosed. Code-switching introduces additional complexity: language detection, model routing, and boundary stitching must occur without degrading transcription accuracy or introducing alignment artifacts at switch points.

Founded in 2022, the company raised a $16 million Series A from Sequoia Capital, XAnge, and New Wave. Founders Jean-Louis Quéguiner and Jonathan Soto positioned the service as audio infrastructure for voice-first platforms rather than a narrow transcription tool.

The engineering focus centers on reliability and operational predictability across multilingual inference workloads - handling acoustic variability, speaker overlap, background noise, and model version rollouts without service degradation. Production deployment at this user scale surfaces edge cases in language detection, diarization boundary errors, and latency tail behavior that define the system's actual robustness beyond benchmarked WER numbers.
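The streaming-transcription pattern described above can be sketched with a plain websocket client: audio frames go up, partial and final transcripts come back. The URL and message format below are hypothetical and do not reflect Gladia's actual API contract.

    # Generic streaming ASR client sketch using the websockets library.
    import asyncio
    import json
    import websockets

    async def stream_transcribe(audio_chunks, url="wss://example.invalid/v1/stream"):
        async with websockets.connect(url) as ws:
            async def send_audio():
                for chunk in audio_chunks:           # e.g. 20-40 ms PCM frames
                    await ws.send(chunk)             # one binary frame per chunk
                await ws.send(json.dumps({"event": "stop"}))

            async def read_results():
                async for message in ws:
                    result = json.loads(message)
                    # Partial hypotheses arrive continuously; finals would carry
                    # speaker labels and word timestamps in this sketch.
                    print(result.get("type"), result.get("text"))

            await asyncio.gather(send_audio(), read_results())

    # asyncio.run(stream_transcribe(chunks)) drives the coroutine; the sub-300ms
    # budget discussed above is consumed between sending a frame and receiving
    # the partial that reflects it.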

2 jobs