Prolific

Open roles at Prolific

Explore 118 open positions at Prolific and find your next opportunity.

AI Trainer – Psychologists (UK)

Prolific

United Kingdom (Remote)

Up to £50 Hourly · 6d ago

Security & Compliance Lead

Prolific

United Kingdom (Remote)

6d ago

AI Trainer - Vietnam

Prolific

Vietnam (Remote)

From $20 Hourly · 6d ago

AI Training - Accountants (UK)

Prolific

United Kingdom (Remote)

Up to £60 Hourly · 6d ago

AI Training - Accountants (EST)

Prolific

United States (Remote)

Up to $75 Hourly · 6d ago

AI Training - Machine Learning Specialist (UK)

Prolific

United Kingdom (Remote)

From $150 Hourly · 6d ago

AI Training Specialist - South UK

Prolific

United Kingdom (Remote)

Up to £25 Hourly · 6d ago

AI Training - Machine Learning Specialist (EST)

Prolific

Worldwide (Remote)

Up to $150 Hourly · 6d ago

Data Science Manager

Prolific

New York, New York, United States (Hybrid)

6d ago

AI Training Specialist - North UK

Prolific

United Kingdom (Remote)

Up to £25 Hourly · 6d ago

Senior Security Engineer

Prolific

United Kingdom (Remote)

6d ago

AI Training - Research Scientist (CAN)

Prolific

Canada (Remote)

Up to £50 Hourly · 6d ago

AI Trainer – Psychologists (CAN)

Prolific

Canada (Remote)

Up to C$50 Hourly · 6d ago

AI Trainer - Personal Finance Advisors (Remote)

Prolific

Worldwide (Remote)

$40 – $75 Hourly · 6d ago

Revenue Operations Lead - AI Business

Prolific

United States (Remote)

$120K – $150K Yearly · 6d ago

AI Training - Research Scientist (PST)

Prolific

Worldwide (Remote)

Up to $50 Hourly · 6d ago

AI Trainer - Medical Doctors (TX)

Prolific

Texas, United States (Remote)

$80 – $150 Hourly · 6d ago

Similar companies

Graphcore

Graphcore, a British semiconductor company and wholly owned subsidiary of SoftBank Group, develops specialized AI compute hardware centered on its Intelligence Processing Unit (IPU). The IPU is a processor architecture designed specifically for machine intelligence workloads rather than general-purpose computing. The company built a complete AI compute stack spanning silicon design through datacenter infrastructure, including the Poplar software framework that sits atop the hardware. Graphcore brought the first Wafer-on-Wafer AI processor to market, a packaging approach that addresses the bandwidth and latency constraints inherent in traditional chip-to-chip interconnects for AI workloads.

The technical scope encompasses semiconductor engineering, processor design, and AI-specific optimizations across both hardware and software layers. The engineering team works on silicon design, wafer-scale integration technology, and the development of tools for AI model optimization. The software stack includes developer tools designed to extract performance from the IPU architecture, with ongoing work to optimize popular AI models for the platform. This systems-level approach targets the throughput and efficiency bottlenecks that emerge when running large-scale machine learning workloads on conventional processor architectures.

Under CEO Nigel Toon's leadership, Graphcore operates with a global presence and maintains teams of semiconductor, software, and AI specialists. The company's technology stack includes standard datacenter interfaces (PCIe, DDR, Ethernet) alongside proprietary elements like the IPU and Poplar software. The subsidiary structure under SoftBank provides backing for continued development of both the silicon and the software layers required to compete in AI compute infrastructure, where the trade-offs between custom silicon development costs and performance gains define commercial viability.

197 jobs

Runpod

RunPod operates an end-to-end AI infrastructure platform focused on GPU compute provisioning for model training, inference, and distributed agent orchestration. The platform serves over 500,000 developers, spanning solo practitioners to enterprise teams deploying at scale. Core infrastructure handles compute allocation, orchestration complexity, and operational overhead, positioning the platform as accessible infrastructure that does not demand deep systems expertise from users.

The technical stack centers on Go, Python, and TypeScript, with containerization through Docker and Kubernetes orchestration on Linux. Engineering domains span distributed systems, GPU compute scheduling, and developer tooling designed to abstract provisioning and scaling mechanics. The company emphasizes reducing operational friction: developers interact with compute resources without managing underlying cluster complexity or infrastructure provisioning bottlenecks.

RunPod maintains a remote-first structure, with teams distributed across the U.S., Canada, Europe, and India. The platform's design reflects a systems-first approach to making GPU compute economically viable and operationally manageable, targeting workloads where cost, reliability, and time-to-deployment constrain AI development cycles.

26 jobs