
About

Vertiv operates critical digital infrastructure at global scale, delivering end-to-end systems that power and cool data centers, communication networks, and commercial facilities. The company's technical scope spans grid-to-chip power chains, thermal management, and intelligent monitoring - infrastructure that determines operational availability and performance characteristics for compute workloads from edge deployments to hyperscale cloud environments. With decades of domain expertise, Vertiv addresses the operational bottlenecks inherent in maintaining continuous uptime for mission-critical applications.

The product portfolio reflects infrastructure constraints across the stack: critical power solutions that maintain grid-to-chip continuity, adaptive cooling systems calibrated for varying thermal loads, and liquid cooling technologies designed specifically for high-density compute environments where traditional air cooling becomes a throughput limiter. Modular prefabricated data centers enable rapid deployment, while advanced battery energy storage systems provide backup power with different trade-offs than traditional UPS architectures. Intelligent monitoring and management systems provide operational visibility across these integrated components.

Vertiv serves customers ranging from hyperscale cloud providers managing efficiency at massive scale to local telecommunications networks with different reliability and cost constraints. The company positions its systems around operational excellence and business continuity - measurable outcomes in environments where infrastructure failures directly impact application availability. Digital services and expert support complement hardware deployment, addressing the operational complexity of maintaining critical infrastructure across geographically distributed sites. Under CEO Giordano Albertazzi, the company maintains a hardware-first approach while incorporating software monitoring and management capabilities, with stated emphasis on sustainability goals alongside traditional reliability metrics.

Open roles at Vertiv

Explore 114 open positions at Vertiv and find your next opportunity.


Associate Engineer Technical Publications VIII

Vertiv

Mandaluyong City, Metro Manila, Philippines (On-site)

2w ago

Sr. Product Manager – Enterprise AI Infrastructure Solutions

Vertiv

Westerville, Ohio, United States (On-site)

1mo ago

Project Manager - Enterprise Data

Vertiv

Westerville, Ohio, United States (On-site)

1mo ago

Sales Data Science & AI Enablement Analyst

Vertiv

Westerville, Ohio, United States (On-site)

1mo ago

Account Manager - AI Data Centres

Vertiv

Vienna, Vienna, Austria (On-site)

1mo ago

Global Intelligent Automations Senior Analyst

Vertiv

Mandaluyong, Metro Manila, Philippines (On-site)

1mo ago

Inside Sales Senior Analyst - Korean Speaker

Vertiv

Mandaluyong, Metro Manila, Philippines (On-site)

1mo ago

Sales Operations & Support Coordinator II

Vertiv

Mandaluyong, Metro Manila, Philippines (On-site)

1mo ago

Accounts Payable Senior Analyst

Vertiv

Mandaluyong City, Metro Manila, Philippines (On-site)

1mo ago

Digital Marketing Specialist

Vertiv

Mandaluyong City, Metro Manila, Philippines (On-site)

1mo ago

Service Support Coordinator I

Vertiv

Philippines (On-site)

1mo ago

Inside Sales Senior Analyst - Thai Speaker

Vertiv

Mandaluyong City, Metro Manila, Philippines (On-site)

1mo ago

Strategic Marketing Manager

Vertiv

Mandaluyong City, Metro Manila, Philippines (On-site)

2mo ago

AI/ Gen AI Engineering Internship (Summer 2026)

Vertiv

Westerville, Ohio, United States (On-site)

2mo ago

Manager - Services Operations

Vertiv

Mandaluyong City, Metro Manila, Philippines (On-site)

2mo ago

UX Engineer – Data Center Infrastructure Optimization

Vertiv

Gent, East Flanders, Belgium (On-site)

2mo ago

Application Development & Support Senior Analyst- .NET Core+React

Vertiv

Pune, Maharashtra, India (On-site)

2mo ago

Similar companies


OpenAI

OpenAI develops and deploys generative transformer models at scale, operating production systems that serve millions of users through ChatGPT and the OpenAI API. The technical challenge spans the full stack: research engineering for novel model architectures, safety engineering for alignment and robustness, and production infrastructure for API deployment at scale. Teams work across research, product engineering, and operations, with work organized around both advancing model capabilities and maintaining reliability for deployed systems serving substantial user traffic. The core technical domains include model development for the GPT series, API infrastructure to support downstream applications, and safety research focused on making AGI beneficial. Engineering work involves trade-offs between model capability, inference cost, latency characteristics, and safety constraints. Research teams collaborate with product and engineering functions to move experimental systems into production deployment, requiring expertise in distributed systems, model optimization, and operational complexity at scale. The company operates from San Francisco with international presence, positioning its work as a global effort toward artificial general intelligence. Cross-functional teams include researchers, engineers, and operations staff working on problems ranging from foundational research to production reliability. The technical culture emphasizes rigorous safety practices alongside advancement of capabilities, with autonomy and ownership distributed across teams working on distinct components of the research-to-deployment pipeline.

741 jobs

Crusoe

Crusoe designs, builds, and operates purpose-built data centers and cloud computing infrastructure powered by renewable energy sources including wind, solar, geothermal, and hydropower. Founded in 2018 by Chase Lochmiller and Cully Cavness, the company operates gigawatt-scale data center campuses and has raised billions in funding. The infrastructure supports AI workloads through partnerships with NVIDIA and AMD, offering GPU-backed cloud services focused on the trade-off between computational scale and energy sustainability. The company's technical stack spans cluster orchestration (Kubernetes, Slurm), infrastructure automation (Terraform, Ansible, Puppet), and distributed storage systems (Ceph, GlusterFS, OpenEBS). Development work involves Python, Go, Java, and C, with infrastructure built on Linux, NVMe storage, and RDMA networking to support high-throughput AI training and inference workloads. The vertical integration approach extends from data center construction through hardware partnerships to cloud platform operations. Crusoe evolved from early operations converting wasted natural gas from oil fields into computing power for bitcoin mining. The current focus is AI infrastructure delivery, where the energy-first approach addresses the operational constraint of power availability at scale - a bottleneck increasingly relevant as model size and inference volume grow. The cloud platform enables organizations to deploy AI solutions with access to GPU capacity backed by renewable energy sources, though specific performance characteristics, availability zones, and pricing models are not publicly detailed in standard materials.

436 jobs

xAI

xAI, founded by Elon Musk in 2023, builds AI systems designed to advance scientific discovery and gain deeper understanding of the universe. The company operates across offices in Palo Alto, Seattle, San Francisco, Tennessee, and London, with a technical infrastructure spanning Python, Rust, JAX, and Kubernetes for model development and deployment, alongside TypeScript, React, and WebAssembly for interface layers. The engineering stack emphasizes RoCEv2 and InfiniBand for networking in distributed training and inference workloads. The company's flagship product is Grok, a conversational AI modeled after the Hitchhiker's Guide to the Galaxy, providing real-time information access integrated with the X platform. Development follows first-principles reasoning with rapid iteration cycles, focusing on system bottlenecks in latency, throughput, and reliability rather than incremental feature additions. The technical approach centers on large language model architectures optimized for both scientific reasoning tasks and production conversational inference at scale. xAI's engineering culture prioritizes operational complexity trade-offs inherent in deploying large models - managing tail latency in multi-tenant inference serving, balancing cost against throughput requirements, and addressing failure modes in real-time information retrieval systems. The team composition spans researchers and engineers working on problems at the intersection of AI capabilities research and production system reliability, with infrastructure supporting both research experimentation and user-facing deployment.

360 jobs

Graphcore

Graphcore, a British semiconductor company and wholly owned subsidiary of SoftBank Group, develops specialized AI compute hardware centered on its Intelligence Processing Unit (IPU). The IPU represents a processor architecture specifically designed for machine intelligence workloads rather than general-purpose computing. The company built a complete AI compute stack spanning silicon design through datacenter infrastructure, including the Poplar software framework that sits atop the hardware. Graphcore brought the first Wafer-on-Wafer AI processor to market, a packaging approach that addresses the bandwidth and latency constraints inherent in traditional chip-to-chip interconnects for AI workloads. The technical scope encompasses semiconductor engineering, processor design, and AI-specific optimizations across both hardware and software layers. The engineering team works on silicon design, wafer-scale integration technology, and the development of tools for AI model optimization. The software stack includes developer tools designed to extract performance from the IPU architecture, with ongoing work to optimize popular AI models for the platform. This systems-level approach attempts to address the throughput and efficiency bottlenecks that emerge when running large-scale machine learning workloads on conventional processor architectures. Under CEO Nigel Toon's leadership, Graphcore operates with global presence and maintains teams of semiconductor, software, and AI specialists. The company's technology stack includes standard datacenter interfaces (PCIe, DDR, Ethernet) alongside proprietary elements like the IPU and Poplar software. The subsidiary structure under SoftBank provides backing for continued development of both the silicon and the software layers required to compete in AI compute infrastructure, where the trade-offs between custom silicon development costs and performance gains define commercial viability.

197 jobs

Cerebras

Cerebras Systems designs and manufactures wafer-scale AI chips that consolidate the compute capacity of dozens of GPUs into a single device. Founded in 2015, the company builds a wafer-scale processor 56 times larger than a standard GPU, addressing the operational complexity of distributed training and inference by offering the programmability of a single-device system while delivering multi-GPU performance. This approach collapses the network bottlenecks and synchronization overhead inherent in GPU clusters, enabling users to run large-scale ML workloads without orchestrating hundreds of accelerators. The company's technical stack spans the full systems hierarchy: custom silicon (wafer-scale chip architecture), compiler infrastructure (MLIR, LLVM IR, and the proprietary CSL language), runtime orchestration (Kubernetes), and deployment tooling. Engineering work touches computer architecture, deep learning kernels, systems software for hardware programmability, and inference serving at scale. Recent partnerships include work with OpenAI on inference deployment, alongside engagements with national laboratories, global enterprises, and healthcare systems requiring high-throughput ML serving. Cerebras positions its hardware for both training and inference workloads, with claimed industry-leading speeds stemming from on-chip interconnect bandwidth and elimination of multi-chip communication latency. The architecture trades traditional data center modularity for integrated performance - relevant for workloads bottlenecked by cross-device synchronization, or where cost-per-inference and tail latency matter more than incremental horizontal scaling. Development infrastructure includes C++, Python, Go, and Zig across the stack, with CI/CD through GitHub Actions and Jenkins.

135 jobs

Baseten

Baseten builds AI infrastructure for production deployment and scaling of models, with work spanning kernel-level optimization for inference performance through developer tooling. The platform ships daily, measuring success by real-world impact of AI products running on it rather than vanity metrics. Engineers embed directly with customers to surface operational bottlenecks, then optimize obsessively - work ranges from TensorRT-LLM and CUDA kernel tuning to building developer tools that reduce deployment friction. The stack centers on inference at scale: TensorRT-LLM and PyTorch for model execution, NVIDIA Triton Inference Server for serving, Kubernetes (EKS) with Karpenter for autoscaling, and Knative for event-driven workloads on AWS EC2. Infrastructure decisions prioritize shipping velocity over process - small teams with real ownership iterate rapidly on production reliability, latency (including tail behavior), and cost efficiency. Docker containerization and PostgreSQL round out core operational dependencies. The team is internationally distributed, composed of engineers and designers who take craft seriously without performative posturing. Customer-embedded engineering informs both platform architecture and developer experience tradeoffs, creating tight feedback loops between deployment reality and infrastructure evolution. From founding, the approach has centered on hands-on problem solving and rapid iteration rather than abstraction layers that delay production learning.

69 jobs