Prolific
Open roles at Prolific
Explore 118 open positions at Prolific and find your next opportunity.
AI Trainer - Advanced Mandarin Fluency
Worldwide (Remote)
AI Trainer - Advanced Japanese Fluency (PST)
Worldwide (Remote)
AI Trainer - Advanced Arabic Fluency (EU)
Europe (Remote)
AI Trainer - Advanced Java Developers (US & Canada)
United States + 1 more (Remote)
AI Trainer - Advanced Spanish Fluency (Spain)
Spain (Remote)
AI Trainer - Advanced Mandarin Fluency (GER)
Worldwide (Remote)
AI Trainer - Advanced Spanish Fluency (Mexico)
Mexico (Remote)
AI Trainer - Advanced Mandarin Fluency (CAN)
Canada (Remote)
AI Trainer - Advanced SQL Developers (Remote)
Worldwide (Remote)
AI Trainer - Clinicians (AUS)
Australia (Remote)
AI Trainer - Advanced SQL Developers
Worldwide (Remote)
AI Trainer - Advanced JavaScript Developers (US & Canada)
United States + 1 more (Remote)
AI Trainer - Clinicians (UK)
United Kingdom (Remote)
AI Trainer - Advanced JavaScript Developers
Worldwide (Remote)
AI Trainer - Advanced Japanese Fluency (FR)
Worldwide (Remote)
AI Trainer - Concierge (Remote)
Worldwide (Remote)
AI Trainer - Advanced Korean Fluency (EST)
Worldwide (Remote)
Similar companies
Graphcore
Graphcore, a British semiconductor company and wholly owned subsidiary of SoftBank Group, develops specialized AI compute hardware centered on its Intelligence Processing Unit (IPU). The IPU is a processor architecture designed specifically for machine intelligence workloads rather than general-purpose computing. The company has built a complete AI compute stack spanning silicon design through datacenter infrastructure, including the Poplar software framework that sits atop the hardware. Graphcore brought the first Wafer-on-Wafer AI processor to market, a packaging approach that addresses the bandwidth and latency constraints inherent in traditional chip-to-chip interconnects for AI workloads.

The technical scope encompasses semiconductor engineering, processor design, and AI-specific optimization across both hardware and software layers. The engineering team works on silicon design, wafer-scale integration technology, and tools for AI model optimization. The software stack includes developer tools designed to extract performance from the IPU architecture, with ongoing work to optimize popular AI models for the platform. This systems-level approach targets the throughput and efficiency bottlenecks that emerge when running large-scale machine learning workloads on conventional processor architectures.

Under CEO Nigel Toon's leadership, Graphcore operates globally and maintains teams of semiconductor, software, and AI specialists. The company's technology stack combines standard datacenter interfaces (PCIe, DDR, Ethernet) with proprietary elements such as the IPU and the Poplar software framework. The subsidiary structure under SoftBank provides backing for continued development of both the silicon and the software layers required to compete in AI compute infrastructure, where the trade-off between custom silicon development costs and performance gains defines commercial viability.
RunPod
RunPod operates an end-to-end AI infrastructure platform focused on GPU compute provisioning for model training, inference, and distributed agent orchestration. The platform serves over 500,000 developers, from solo practitioners to enterprise teams deploying at scale. Core infrastructure handles compute allocation, orchestration complexity, and operational overhead, positioning the platform as accessible infrastructure that does not demand deep systems expertise from users.

The technical stack centers on Go, Python, and TypeScript, with containerization through Docker and Kubernetes orchestration on Linux. Engineering domains span distributed systems, GPU compute scheduling, and developer tooling that abstracts provisioning and scaling mechanics. The company emphasizes reducing operational friction: developers interact with compute resources without managing underlying cluster complexity or infrastructure provisioning bottlenecks.

RunPod maintains a remote-first structure with teams distributed across the U.S., Canada, Europe, and India. The platform's design reflects a systems-first approach to making GPU compute economically viable and operationally manageable, targeting workloads where cost, reliability, and time-to-deployment constrain AI development cycles.