About

LangChain operates an engineering platform and open source frameworks for building, testing, and deploying AI agents. The core offering pairs LangChain and LangGraph - open source frameworks that provide pre-built agent architectures and access to 1,000+ integrations - with LangSmith, a commercial platform for observability, evaluation, and deployment of LLM systems. The frameworks see over 90 million combined downloads per month and are used by millions of developers worldwide, with named deployments at Replit, Clay, Cloudflare, Harvey, Rippling, Vanta, Workday, LinkedIn, and Coinbase.
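As a rough illustration of the framework layer, the following sketch wires one tool into LangGraph's prebuilt agent constructor. It is a minimal sketch assuming the public Python langgraph, langchain-core, and langchain-openai packages with an OpenAI key in the environment; the get_weather tool is a hypothetical stand-in for one of the 1,000+ integrations, and exact APIs differ between releases.

    # Minimal LangGraph agent sketch; package APIs may vary by version.
    from langchain_core.tools import tool
    from langchain_openai import ChatOpenAI
    from langgraph.prebuilt import create_react_agent

    @tool
    def get_weather(city: str) -> str:
        """Hypothetical integration: return a canned weather report."""
        return f"It is sunny in {city}."

    # Prebuilt ReAct-style agent: the model decides when to call the tool.
    agent = create_react_agent(ChatOpenAI(model="gpt-4o-mini"), tools=[get_weather])

    result = agent.invoke({"messages": [("user", "What is the weather in Paris?")]})
    print(result["messages"][-1].content)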

The technical stack addresses the production bottlenecks of agent engineering: reliability through comprehensive observability, evaluation tooling to surface failure modes before deployment, and deployment infrastructure to move from prototype to production. LangSmith's platform provides the operational layer for teams moving LLM systems into production environments, while the open source frameworks prioritize development velocity through pre-built components and extensive integration coverage. The architecture allows granular control over agent behavior while reducing the complexity of integrating external services and managing LLM system reliability at scale.
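To make the observability layer concrete, here is a minimal sketch of tracing an application function into LangSmith with the langsmith Python client's traceable decorator. The environment variable names follow LangSmith's documented configuration, but summarize_ticket is an illustrative placeholder and exact settings may differ across client versions.

    import os
    from langsmith import traceable

    # Tracing is switched on via environment variables; a LangSmith API key
    # is required for runs to actually be recorded.
    os.environ["LANGCHAIN_TRACING_V2"] = "true"
    # os.environ["LANGCHAIN_API_KEY"] = "<your LangSmith API key>"

    @traceable(name="summarize_ticket")
    def summarize_ticket(text: str) -> str:
        # A real implementation would call an LLM; kept trivial for the sketch.
        return text[:80]

    print(summarize_ticket("Customer reports intermittent agent timeouts in production."))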

LangChain serves both major enterprises and startups building AI agents, with technical domains spanning agent engineering, LLM systems, observability, evaluation, and developer tooling. The company is led by CEO Harrison Chase and maintains a US headquarters, with a worldwide developer base. The dual model of open source frameworks and commercial platform reflects a focus on production-readiness and operational support for teams deploying agents at scale.

Open roles at LangChain

Explore 78 open positions at LangChain and find your next opportunity.

Role | Location | Salary (yearly) | Posted
Solutions Architect (Austin) | Austin, Texas, United States (On-site) | $170K – $190K | 1mo ago
Lifecycle Marketing Manager | San Francisco, California, United States (On-site) | $160K – $180K | 1mo ago
Deployed Engineer (Boston) | Boston, Massachusetts, United States (On-site) | $150K – $250K | 1mo ago
Solutions Architect (Amsterdam) | Netherlands (Remote) | Not listed | 1mo ago
Solutions Architect (NYC) | New York, United States (On-site) | $170K – $190K | 1mo ago
Marketing Operations Manager | San Francisco, California, United States (On-site) | $160K – $240K | 1mo ago
Deployed Engineer (Central) | Worldwide (Remote) | $150K – $250K | 1mo ago
Deployed Engineer (Stockholm) | Stockholm, Sweden (On-site) | Not listed | 1mo ago
Commercial Account Executive (UK) | London, England, United Kingdom (On-site) | $175K – $220K | 1mo ago
Deployed Engineer (Sydney) | Sydney, New South Wales, Australia (On-site) | Not listed | 1mo ago
Account Executive (Stockholm) | Stockholm, Sweden (On-site) | $225K – $350K | 1mo ago
Deployed Engineer (Amsterdam) | North Holland, Netherlands (Remote) | Not listed | 1mo ago
Sales Development Representative | New York, United States (On-site) | $110K – $128K | 1mo ago
Sales Development Representative | San Francisco, California, United States (On-site) | $110K – $128K | 1mo ago
Brand Lead | San Francisco, California, United States (On-site) | $160K – $200K | 1mo ago
Senior Backend Engineer, LangSmith Deployments | San Francisco, California, United States (On-site) | $175K – $225K | 2mo ago
Account Executive (UK) | London, England, United Kingdom (On-site) | $225K – $350K | 2mo ago
Account Executive (Australia) | London, England, United Kingdom (On-site) | $225K – $350K | 2mo ago
Account Executive (Germany) | London, England, United Kingdom (On-site) | $225K – $350K | 2mo ago

Similar companies

OpenAI

OpenAI develops and deploys generative transformer models at scale, operating production systems that serve millions through ChatGPT, GPT model APIs, and the OpenAI API. The technical challenge spans the full stack: research engineering for novel model architectures, safety engineering for alignment and robustness, and production infrastructure for API deployment at scale. Teams work across research, product engineering, and operations, with work organized around both advancing model capabilities and maintaining reliability for deployed systems serving substantial user traffic.

The core technical domains include model development for the GPT series, API infrastructure to support downstream applications, and safety research focused on making AGI beneficial. Engineering work involves trade-offs between model capability, inference cost, latency characteristics, and safety constraints. Research teams collaborate with product and engineering functions to move from experimental systems to production deployment, requiring expertise in distributed systems, model optimization, and operational complexity at scale.

The company operates from San Francisco with international presence, positioning work as a global effort toward artificial general intelligence. Cross-functional teams include researchers, engineers, and operations staff working on problems ranging from foundational research to production reliability. The technical culture emphasizes rigorous safety practices alongside advancement of capabilities, with autonomy and ownership distributed across teams working on distinct components of the research-to-deployment pipeline.

741 jobs

Anthropic

Anthropic is an AI safety and research company founded in 2021 by seven former OpenAI employees, now operating as a Public Benefit Corporation with approximately 3,000 employees. The company develops the Claude family of large language models and associated AI assistant implementations, with a technical mandate centered on reliability, interpretability, and steerability. Under CEO Dario Amodei, Anthropic has reached a reported valuation of $183 billion while maintaining an explicit focus on AI systems aligned with human values and long-term societal benefit.

The core technical work spans AI safety research, interpretable AI systems, and steerable large language models. Claude, Anthropic's primary product line, is positioned as engineered for safety, accuracy, and security in production deployments. The company's research agenda prioritizes understanding failure modes and developing evaluation frameworks that account for reliability constraints in real-world inference scenarios, rather than pursuing capability benchmarks in isolation.

Anthropic's operational model combines frontier research with practical deployment considerations - balancing the latency-throughput-cost trade-offs inherent in large-scale language model serving while maintaining interpretability as a first-class constraint. The company approaches AI assistant development through the lens of alignment research, treating production systems as both products and testbeds for safety techniques. This dual mandate shapes technical priorities: understanding model behavior under distribution shift, quantifying uncertainty in high-stakes applications, and building systems where performance degradation is predictable and bounded.

683 jobs

Mistral AI

Mistral AI is a French AI company founded in April 2023 by Arthur Mensch, Guillaume Lample, and Timothée Lacroix - researchers with prior affiliations at Google DeepMind and Meta and academic roots at École Polytechnique. The company develops and releases open-weight, state-of-the-art generative AI models positioned as alternatives to proprietary solutions, with a focus on democratizing access to frontier AI technology. Their core approach centers on open, transparent model development that enables developers, enterprises, and institutions to build applications while maintaining control over their data and deployments.

The company's primary product line consists of open-weight generative AI models released publicly, which Mistral claims rival proprietary solutions in capability. Their technical domains span generative AI model training, with particular emphasis on open-weight architectures, AI transparency, and bias mitigation. The founding mission explicitly opposes what the company characterizes as emerging opacity and centralization in AI systems, positioning their open-weight approach as a structural alternative to closed, proprietary models.

Mistral AI's operational model emphasizes community-backed development and targets a broad user base spanning individual developers, enterprise deployments, and institutional applications across global markets. The company's cultural positioning centers on maintaining user control over inference infrastructure and data pipelines, combating censorship in model outputs, and providing an alternative to concentrated control of frontier AI capabilities. While specific scale metrics around model performance, deployment volumes, or operational characteristics are not publicly detailed, the company claims to have achieved state-of-the-art results in their released model family.

212 jobs

Cohere

Cohere builds enterprise-focused foundational models designed for production deployment with emphasis on security, privacy, and operational trust. Founded in 2019 in Toronto, the company has raised nearly $1 billion and scaled to hundreds of employees worldwide. The technical focus spans semantic search, content generation, and customer experience applications - domains where model reliability and data governance are non-negotiable constraints for enterprise adoption.

The company's architecture decisions reflect production realities over research novelty. Models are architected for deployment into regulated environments where data residency, access controls, and audit trails matter as much as accuracy metrics. This positioning addresses the gap between frontier model capabilities and enterprise operational requirements: latency SLAs, cost predictability, and compliance frameworks that prevent many organizations from operationalizing public AI APIs.

Cohere Labs has published over 100 papers and built a research community of 4,500+ researchers, signaling ongoing investment in foundational work rather than pure application-layer focus. The team composition skews heavily toward researchers and engineers from academic backgrounds, which maps to the technical challenge space - building models that balance performance, safety constraints, and deployment flexibility across varied enterprise infrastructure.

106 jobs

Baseten

Baseten builds AI infrastructure for production deployment and scaling of models, with work spanning kernel-level optimization for inference performance through developer tooling. The platform ships daily, measuring success by real-world impact of AI products running on it rather than vanity metrics. Engineers embed directly with customers to surface operational bottlenecks, then optimize obsessively - work ranges from TensorRT-LLM and CUDA kernel tuning to building developer tools that reduce deployment friction.

The stack centers on inference at scale: TensorRT-LLM and PyTorch for model execution, NVIDIA Triton Inference Server for serving, Kubernetes (EKS) with Karpenter for autoscaling, and Knative for event-driven workloads on AWS EC2. Infrastructure decisions prioritize shipping velocity over process - small teams with real ownership iterate rapidly on production reliability, latency (including tail behavior), and cost efficiency. Docker containerization and PostgreSQL round out core operational dependencies.

The team is internationally distributed, composed of engineers and designers who take craft seriously without performative posturing. Customer-embedded engineering informs both platform architecture and developer experience tradeoffs, creating tight feedback loops between deployment reality and infrastructure evolution. From founding, the approach has centered on hands-on problem solving and rapid iteration rather than abstraction layers that delay production learning.

69 jobs