Applied Intuition

About

Applied Intuition develops software infrastructure for autonomous systems across the automotive, defense, trucking, construction, mining, and agriculture sectors. Founded in 2017 and headquartered in Mountain View, the company provides three core products: a Vehicle OS, a Self-Driving System, and a development toolchain for building, validating, and deploying AI-driven vehicles. The platform addresses the full-lifecycle bottlenecks - development velocity, validation coverage, and production deployment - that typically constrain time-to-market for intelligent machines.

The company operates at significant scale: 18 of the top 20 global automakers use its solutions, and it maintains contracts across major U.S. Department of Defense programs. Applied Intuition recently completed a Series F funding round at a $15 billion valuation. Its technical focus spans vehicle operating systems, autonomous driving stacks, and the simulation and validation infrastructure required to achieve safety-critical reliability standards. The toolchain approach suggests the company is tackling the operational complexity of managing sensor data pipelines, perception model evaluation, and scenario coverage gaps that plague autonomous system development.

With offices in 12 locations - Mountain View, Washington, D.C., multiple U.S. defense-corridor cities, and international sites in London, Stuttgart, Munich, Stockholm, Bangalore, Seoul, and Tokyo - the company maintains proximity to both OEM customers and defense installations. Led by CEO Qasar Younis, Applied Intuition positions itself at the intersection of commercial autonomy and defense applications - two domains with divergent failure-mode tolerances and deployment constraints, but shared requirements for robust perception, planning, and real-time decision-making infrastructure.

Open roles at Applied Intuition

Explore 199 open positions at Applied Intuition and find your next opportunity.


Program Manager - People Operations

Applied Intuition

Sunnyvale, California, United States (On-site)

$115K – $183K Yearly · 3d ago

Senior Sensor Rendering Software Engineer

Applied Intuition

Sunnyvale, California, United States (On-site)

$150K – $250K Yearly · 3d ago

Model Based Systems Engineer

Applied Intuition

Sunnyvale, California, United States (On-site)

$118K – $230K Yearly · 3d ago

Embedded AI Engineer – Android Automotive (On-Device Intelligence)

Applied Intuition

Sunnyvale, California, United States (On-site)

$150K – $250K Yearly · 5d ago

People Operations Generalist

Applied Intuition

Sunnyvale, California, United States (On-site)

$95K – $121K Yearly · 5d ago

Product Manager - Vehicle OS

Applied Intuition

Sunnyvale, California, United States (On-site)

$125K – $252K Yearly · 5d ago

Chief Engineer - Trucking

Applied Intuition

Tokyo Prefecture, Japan (On-site)

5d ago

Systems Engineer

Applied Intuition

Stuttgart, Baden-Württemberg, Germany (On-site)

7d ago

Information Systems Security Engineer (ISSE)

Applied Intuition

Fort Walton Beach, Florida, United States (On-site)

$150K – $200K Yearly · 7d ago

Software Engineer

Applied Intuition

Stuttgart, Baden-Württemberg, Germany (On-site)

7d ago

Software Engineer / DevOps

Applied Intuition

Stuttgart, Baden-Württemberg, Germany (On-site)

7d ago

C++ Software Engineer - Mission Systems

Applied Intuition

Washington, District of Columbia, United States (On-site)

$150K – $200K Yearly · 2w ago

Engineering Manager - Data Platform and Infrastructure

Applied Intuition

Sunnyvale, California, United States (On-site)

$65K – $400K Yearly · 2w ago

Cross-Functional Producer, Design

Applied Intuition

Sunnyvale, California, United States (On-site)

$140K – $200K Yearly · 2w ago

Software Engineer - Insights Platform

Applied Intuition

Sunnyvale, California, United States (On-site)

$125K – $222K Yearly · 2w ago

Procurement Manager

Applied Intuition

Sunnyvale, California, United States (On-site)

$180K – $230K Yearly · 2w ago

Application Engineer - Technical Customer Lead

Applied Intuition

Sunnyvale, California, United States (On-site)

$105K – $165K Yearly · 2w ago

IT Operations Engineer

Applied Intuition

Fort Walton Beach, Florida, United States (On-site)

$85K – $110K Yearly · 3w ago

Fleet Reliability Engineer

Applied Intuition

Sunnyvale, California, United States (On-site)

$110K – $150K Yearly · 3w ago

Missions Software Engineer (Client Facing)

Applied Intuition

Washington, District of Columbia, United States (On-site)

$145K – $190K Yearly · 3w ago

Similar companies


CoreWeave

CoreWeave operates specialized cloud infrastructure purpose-built for AI workloads, with data centers across the US and Europe delivering GPU compute for large language model training and inference at scale. Founded in 2017 as Atlantic Crypto, a cryptocurrency mining operation, the company executed a complete strategic pivot to AI infrastructure - rebuilding from first principles rather than retrofitting existing cloud architectures.

The platform runs on Kubernetes-based orchestration designed specifically for AI workloads, coupled with custom storage solutions engineered to handle the I/O patterns and throughput requirements of model training and deployment pipelines. The technical stack centers on NVIDIA GPUs with orchestration built in Go, Python, and C++ on Linux, instrumented with Prometheus, Grafana, and OpenTelemetry for observability across distributed systems. Rather than adapting general-purpose cloud tooling, CoreWeave's infrastructure treats GPU compute density, inter-node bandwidth, and storage parallelism as primary design constraints. This systems-level focus reflects a team drawn from infrastructure engineering and quantitative trading backgrounds - disciplines where latency budgets and resource utilization directly determine feasibility.

CoreWeave serves AI labs, enterprises, and startups requiring production-scale inference and training capacity. The company's recognition on the TIME100 most influential companies list signals market adoption of specialized AI infrastructure as distinct from traditional cloud providers. For engineers, the environment offers direct exposure to the operational realities of running GPU clusters at scale: thermal management, network topology for distributed training, failure modes in multi-tenant GPU environments, and the cost-performance trade-offs inherent in serving latency-sensitive inference workloads alongside batch training jobs.

436 jobs

Graphcore

Graphcore, a British semiconductor company and wholly owned subsidiary of SoftBank Group, develops specialized AI compute hardware centered on its Intelligence Processing Unit (IPU). The IPU is a processor architecture designed specifically for machine intelligence workloads rather than general-purpose computing. The company built a complete AI compute stack spanning silicon design through datacenter infrastructure, including the Poplar software framework that sits atop the hardware. Graphcore brought the first Wafer-on-Wafer AI processor to market, a packaging approach that addresses the bandwidth and latency constraints inherent in traditional chip-to-chip interconnects for AI workloads.

The technical scope encompasses semiconductor engineering, processor design, and AI-specific optimizations across both hardware and software layers. The engineering team works on silicon design, wafer-scale integration technology, and the development of tools for AI model optimization. The software stack includes developer tools designed to extract performance from the IPU architecture, with ongoing work to optimize popular AI models for the platform. This systems-level approach attempts to address the throughput and efficiency bottlenecks that emerge when running large-scale machine learning workloads on conventional processor architectures.

Under CEO Nigel Toon's leadership, Graphcore operates with global presence and maintains teams of semiconductor, software, and AI specialists. The company's technology stack includes standard datacenter interfaces (PCIe, DDR, Ethernet) alongside proprietary elements like the IPU and Poplar software. The subsidiary structure under SoftBank provides backing for continued development of both the silicon and the software layers required to compete in AI compute infrastructure, where the trade-offs between custom silicon development costs and performance gains define commercial viability.

197 jobs

Cohere

Cohere builds enterprise-focused foundation models designed for production deployment, with emphasis on security, privacy, and operational trust. Founded in 2019 in Toronto, the company has raised nearly $1 billion and scaled to hundreds of employees worldwide. The technical focus spans semantic search, content generation, and customer experience applications - domains where model reliability and data governance are non-negotiable constraints for enterprise adoption.

The company's architecture decisions reflect production realities over research novelty. Models are architected for deployment into regulated environments where data residency, access controls, and audit trails matter as much as accuracy metrics. This positioning addresses the gap between frontier model capabilities and enterprise operational requirements: latency SLAs, cost predictability, and compliance frameworks that prevent many organizations from operationalizing public AI APIs.

Cohere Labs has published over 100 papers and built a research community of 4,500+ researchers, signaling ongoing investment in foundational work rather than a pure application-layer focus. The team composition skews heavily toward researchers and engineers from academic backgrounds, which maps to the technical challenge space - building models that balance performance, safety constraints, and deployment flexibility across varied enterprise infrastructure.

106 jobs

Baseten

Baseten builds AI infrastructure for production deployment and scaling of models, with work spanning kernel-level optimization for inference performance through developer tooling. The platform ships daily, measuring success by the real-world impact of AI products running on it rather than vanity metrics. Engineers embed directly with customers to surface operational bottlenecks, then optimize obsessively - work ranges from TensorRT-LLM and CUDA kernel tuning to building developer tools that reduce deployment friction.

The stack centers on inference at scale: TensorRT-LLM and PyTorch for model execution, NVIDIA Triton Inference Server for serving, Kubernetes (EKS) with Karpenter for autoscaling, and Knative for event-driven workloads on AWS EC2. Infrastructure decisions prioritize shipping velocity over process - small teams with real ownership iterate rapidly on production reliability, latency (including tail behavior), and cost efficiency. Docker containerization and PostgreSQL round out core operational dependencies.

The team is internationally distributed, composed of engineers and designers who take craft seriously without performative posturing. Customer-embedded engineering informs both platform architecture and developer experience tradeoffs, creating tight feedback loops between deployment reality and infrastructure evolution. From its founding, the approach has centered on hands-on problem solving and rapid iteration rather than abstraction layers that delay production learning.

69 jobs