About

Crusoe designs, builds, and operates purpose-built data centers and cloud computing infrastructure powered by renewable energy sources, including wind, solar, geothermal, and hydropower. Founded in 2018 by Chase Lochmiller and Cully Cavness, the company operates gigawatt-scale data center campuses and has raised billions in funding. The infrastructure supports AI workloads through partnerships with NVIDIA and AMD, offering GPU-backed cloud services that balance computational scale against energy sustainability.

The company's technical stack spans cluster orchestration (Kubernetes, Slurm), infrastructure automation (Terraform, Ansible, Puppet), and distributed storage systems (Ceph, GlusterFS, OpenEBS). Development work involves Python, Go, Java, and C, with infrastructure built on Linux, NVMe storage, and RDMA networking to support high-throughput AI training and inference workloads. The vertical integration approach extends from data center construction through hardware partnerships to cloud platform operations.

Crusoe evolved from early operations converting wasted natural gas from oil fields into computing power for bitcoin mining. The current focus is AI infrastructure delivery, where the energy-first approach addresses the operational constraint of power availability at scale - a bottleneck increasingly relevant as model size and inference volume grow. The cloud platform enables organizations to deploy AI solutions with access to GPU capacity backed by renewable energy sources, though specific performance characteristics, availability zones, and pricing models are not publicly detailed in standard materials.

Open roles at Crusoe

Explore 284 open positions at Crusoe and find your next opportunity.

Software Engineer II, SDN Networking

Crusoe

San Francisco, California, United States (On-site)

$131K – $154K Yearly · 6d ago

Data Center Systems Engineer, R&D

Crusoe

Denver, Colorado, United States (On-site)

$188K – $235K Yearly · 6d ago

Network Architect

Crusoe

San Francisco, California, United States (On-site)

$195K – $225K Yearly · 6d ago

Senior Director, Product Marketing

Crusoe

San Francisco, California, United States (On-site)

$272K – $321K Yearly · 6d ago

Manager, Revenue Operations

Crusoe

Bellevue, Washington, United States (On-site)

$136K – $165K Yearly · 6d ago

Technology Scout, R&D

Crusoe

Denver, Colorado, United States (On-site)

$161K – $195K Yearly · 6d ago

Senior Manager, Production

Crusoe

Brighton, Colorado, United States (On-site)

$164K – $193K Yearly · 6d ago

Commissioning Engineer, Mechanical

Crusoe

Amarillo, Texas, United States (On-site)

$155K – $165K Yearly · 6d ago

Field Acceptance Testing Technician

Crusoe

Brighton, Colorado, United States (On-site)

$33 – $36 Hourly · 6d ago

Staff Network Deployment Engineer, Lab

Crusoe

San Francisco, California, United States (On-site)

$193K – $234K Yearly · 6d ago

Senior SDN Development Engineer (Management Plane)

Crusoe

San Francisco, California, United States (On-site)

$152K – $184K Yearly · 6d ago

Senior Director, Cloud Engineering

Crusoe

San Francisco, California, United States (On-site)

$301K – $355K Yearly · 6d ago

Project Executive

Crusoe

Childress, Texas, United States (On-site)

$238K – $280K Yearly · 6d ago

Enterprise IT Architect

Crusoe

San Francisco, California, United States (On-site)

$195K – $225K Yearly · 6d ago

Legal Counsel, Power & Energy

Crusoe

San Francisco, California, United States (On-site)

$218K – $273K Yearly · 6d ago

Senior Enterprise Technology Administrator, Finance Systems

Crusoe

San Francisco, California, United States (On-site)

$140K – $165K Yearly · 6d ago

Senior Manager, R&D

Crusoe

Denver, Colorado, United States (Hybrid)

$132K – $165K Yearly · 6d ago

Manager, Revenue Operations

Crusoe

Denver, Colorado, United States (On-site)

$111K – $135K Yearly · 6d ago

Product Quality Manager - Mission Critical

Crusoe

Brighton, Colorado, United States (On-site)

$99.8K – $114K Yearly · 6d ago

Similar companies

Graphcore

Graphcore, a British semiconductor company and wholly owned subsidiary of SoftBank Group, develops specialized AI compute hardware centered on its Intelligence Processing Unit (IPU). The IPU represents a processor architecture specifically designed for machine intelligence workloads rather than general-purpose computing. The company built a complete AI compute stack spanning silicon design through datacenter infrastructure, including the Poplar software framework that sits atop the hardware. Graphcore brought the first Wafer-on-Wafer AI processor to market, a packaging approach that addresses the bandwidth and latency constraints inherent in traditional chip-to-chip interconnects for AI workloads.

The technical scope encompasses semiconductor engineering, processor design, and AI-specific optimizations across both hardware and software layers. The engineering team works on silicon design, wafer-scale integration technology, and the development of tools for AI model optimization. The software stack includes developer tools designed to extract performance from the IPU architecture, with ongoing work to optimize popular AI models for the platform. This systems-level approach attempts to address the throughput and efficiency bottlenecks that emerge when running large-scale machine learning workloads on conventional processor architectures.

Under CEO Nigel Toon's leadership, Graphcore operates with global presence and maintains teams of semiconductor, software, and AI specialists. The company's technology stack includes standard datacenter interfaces (PCIe, DDR, Ethernet) alongside proprietary elements like the IPU and Poplar software. The subsidiary structure under SoftBank provides backing for continued development of both the silicon and the software layers required to compete in AI compute infrastructure, where the trade-offs between custom silicon development costs and performance gains define commercial viability.

197 jobs

Vertiv

Vertiv operates critical digital infrastructure at global scale, delivering end-to-end systems that power and cool data centers, communication networks, and commercial facilities. The company's technical scope spans grid-to-chip power chains, thermal management, and intelligent monitoring - infrastructure that determines operational availability and performance characteristics for compute workloads from edge deployments to hyperscale cloud environments. With decades of domain expertise, Vertiv addresses the operational bottlenecks inherent in maintaining continuous uptime for mission-critical applications.

The product portfolio reflects infrastructure constraints across the stack: critical power solutions that maintain grid-to-chip continuity, adaptive cooling systems calibrated for varying thermal loads, and liquid cooling technologies designed specifically for high-density compute environments where traditional air cooling becomes a throughput limiter. Modular prefabricated data centers enable deployment at speed, while advanced battery energy storage systems provide backup power with different trade-offs than traditional UPS architectures. Intelligent monitoring and management systems surface operational visibility across these integrated components.

Vertiv serves customers ranging from hyperscale cloud providers managing efficiency at massive scale to local telecommunications networks with different reliability and cost constraints. The company positions its systems around operational excellence and business continuity - measurable outcomes in environments where infrastructure failures directly impact application availability. Digital services and expert support complement hardware deployment, addressing the operational complexity of maintaining critical infrastructure across geographically distributed sites.

Under CEO Giordano Albertazzi, the company maintains a hardware-first approach while incorporating software monitoring and management capabilities, with stated emphasis on sustainability goals alongside traditional reliability metrics.

140 jobs

Cerebras

Cerebras Systems designs and manufactures wafer-scale AI chips that consolidate the compute capacity of dozens of GPUs into a single device. Founded in 2015, the company's core architecture is 56 times larger than standard GPUs, addressing the operational complexity of distributed training and inference by offering programmability equivalent to a single-device system while delivering multi-GPU performance. This approach collapses the network bottlenecks and synchronization overhead inherent in GPU clusters, enabling users to run large-scale ML workloads without orchestrating hundreds of accelerators.

The company's technical stack spans the full systems hierarchy: custom silicon (wafer-scale chip architecture), compiler infrastructure (MLIR, LLVM IR, and their proprietary CSL language), runtime orchestration (Kubernetes), and deployment tooling. Engineering work touches computer architecture, deep learning kernels, systems software for hardware programmability, and inference serving at scale. Recent partnerships include work with OpenAI on inference deployment, alongside engagements with national laboratories, global enterprises, and healthcare systems requiring high-throughput ML serving.

Cerebras positions its hardware for both training and inference workloads, with claimed industry-leading speeds stemming from on-chip interconnect bandwidth and elimination of multi-chip communication latency. The architecture trades traditional data center modularity for integrated performance - relevant for workloads bottlenecked by cross-device synchronization or where cost-per-inference and tail latency matter more than incremental horizontal scaling. Development infrastructure includes C++, Python, Go, and Zig across the stack, with CI/CD through GitHub Actions and Jenkins.

135 jobs

Pinecone

Pinecone operates a fully managed vector database service designed for production AI applications requiring storage and retrieval of high-dimensional embeddings. The system handles vector search at scale across recommendation systems, semantic search, and related ML-backed services. Founded by Edo Liberty, formerly a research director at AWS with prior experience building custom vector search systems at large scale, the company is credited with establishing the vector database category as a distinct infrastructure layer.

The technical stack centers on systems languages - Rust, Go, C++, and Python - with RocksDB as the storage engine and Kubernetes orchestration across AWS, GCP, and Azure. This architecture targets the operational complexity of managing embedding indices, query latency, and throughput at production scale, abstracting infrastructure decisions from engineering teams deploying AI features. The platform serves thousands of companies, positioning itself on ease of deployment and reduced time-to-production for vector-backed applications. The founding principle emphasizes accessibility for engineering teams of varying sizes, evolving the managed service model to minimize operational overhead in running vector workloads. Core focus areas include retrieval performance, reliability under production load, and cost-efficiency trade-offs inherent to high-dimensional search systems.
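The core retrieval operation such a service manages can be sketched as a nearest-neighbor search over stored embeddings. The sketch below is a brute-force illustration, not Pinecone's implementation (which uses approximate indices and a distributed storage layer to keep latency flat as corpora grow); the `index` dictionary and document IDs are hypothetical.

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def top_k(query, index, k=2):
    # Brute-force scan: score every stored vector against the query,
    # then keep the k highest-scoring document IDs.
    scored = [(doc_id, cosine_similarity(query, vec))
              for doc_id, vec in index.items()]
    return sorted(scored, key=lambda s: s[1], reverse=True)[:k]

# Hypothetical embedding index mapping document IDs to vectors.
index = {
    "doc_a": [1.0, 0.0, 0.0],
    "doc_b": [0.9, 0.1, 0.0],
    "doc_c": [0.0, 1.0, 0.0],
}

print(top_k([1.0, 0.05, 0.0], index))  # doc_a and doc_b rank highest
```

The O(n) scan per query is exactly the cost a managed vector database amortizes away with approximate nearest-neighbor indices, trading a small amount of recall for sub-linear query time.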

9 jobs