About

Replit operates a web-based code editor and multiplayer computing environment used by millions for collaborative software development. The platform eliminates traditional barriers to application creation through natural language interfaces, allowing users to build applications without conventional development workflows - a focus demonstrated by architectural decisions like removing the save button from its editor. The multiplayer environment serves as infrastructure for experimentation, sharing, and collaborative growth at scale.

The company measures success by the number of people empowered to create software rather than vanity metrics, reflecting a systems-level focus on removing bottlenecks in developer onboarding and productivity. Technical decisions prioritize shipping velocity and operational autonomy: the culture emphasizes extreme ownership, radical bets, and bias toward action. Engineers operate with the latitude to pursue emergent ideas and question established patterns when friction appears in the development loop.

The platform's architecture supports collaborative coding workflows at scale, handling millions of concurrent users across a shared computing environment. This requires managing trade-offs between multi-tenancy constraints, latency in collaborative editing, and operational complexity of maintaining compute resources for distributed development sessions. The technical focus centers on developer tools, web-based editing infrastructure, and the reliability challenges of real-time collaborative computing.

Open roles at Replit

Explore 63 open positions at Replit and find your next opportunity.

Manager, Workforce Management & Support Insights

Replit

Foster City, California, United States (Hybrid)

$150K – $200K Yearly · 3w ago

Sr. Manager, Business Development

Replit

Salt Lake City, Utah, United States (Hybrid)

$200K – $240K Yearly · 3w ago

Partnerships Lead, Startups

Replit

Foster City, California, United States (Hybrid)

$185K – $240K Yearly · 1mo ago

Offensive Security Engineer

Replit

Foster City, California, United States (Hybrid)

$188K – $313K Yearly · 1mo ago

Premium Support Engineer

Replit

New York, New York, United States (On-site)

$185K – $210K Yearly · 1mo ago

Director Sales Ops

Replit

Foster City, California, United States (Hybrid)

$220K – $300K Yearly · 1mo ago

Partner Engineer

Replit

Foster City, California, United States (Hybrid)

$150K – $200K Yearly · 1mo ago

Business Development Representative

Replit

Salt Lake City, Utah, United States (Hybrid)

$72K – $85K Yearly · 1mo ago

Mid-Market Account Executive

Replit

Salt Lake City, Utah, United States (On-site)

$180K – $240K Yearly · 1mo ago

Deal Desk Analyst

Replit

Foster City, California, United States (Hybrid)

$95K – $120K Yearly · 1mo ago

VP of Consumer & Brand Marketing

Replit

Foster City, California, United States (Hybrid)

1mo ago

Senior Product Engineer, Product Platform

Replit

Foster City, California, United States (Hybrid)

$225K – $320K Yearly · 1mo ago

Senior Field Marketing Manager

Replit

Foster City, California, United States (Hybrid)

$120K – $180K Yearly · 1mo ago

Engineering Manager, UX

Replit

Foster City, California, United States (Hybrid)

$200K – $300K Yearly · 2mo ago

IT Administrator - Endpoint Platforms

Replit

Foster City, California, United States (Hybrid)

$95K – $135K Yearly · 2mo ago

GRC Lead (Governance, Risk, and Compliance)

Replit

Foster City, California, United States

$208K – $300K Yearly · 2mo ago

Senior Lifecycle Marketing Manager

Replit

Foster City, California, United States (Hybrid)

$165K – $215K Yearly · 2mo ago

Senior Growth Marketing Manager, Mobile & Conversions

Replit

Foster City, California, United States (Hybrid)

$165K – $215K Yearly · 2mo ago

Support Engineer I (Weekend Shift)

Replit

Foster City, California, United States

$110K – $140K Yearly · 2mo ago

Staff Product Designer, Visual Design

Replit

Foster City, California, United States (Hybrid)

$200K – $250K Yearly · 2mo ago

Similar companies

Decagon

Decagon builds a conversational AI platform designed to replace or augment legacy customer support systems by deploying intelligent AI agents across chat, email, and voice channels. The company positions its technology as infrastructure for delivering concierge-level customer experiences at scale, targeting brands looking to support, onboard, and retain customers without proportional headcount growth. Led by CEO Jesse Zhang and founded by serial entrepreneurs, Decagon operates from the US and focuses on addressing the operational constraints of traditional customer support systems.

The platform's core technical approach centers on Agent Operating Procedures (AOPs), a natural-language-to-code compilation system that allows non-technical users to define agent behavior while preserving technical team control over guardrails, integrations, and versioning. This design addresses a common trade-off in AI tooling: enabling rapid iteration by domain experts without sacrificing reliability controls or introducing configuration drift. The agent orchestration layer spans multiple channels and claims to amplify CX team impact by 10x, though specific benchmarks around latency, accuracy, or failure rate are not publicly detailed.

Decagon's technical domains span conversational AI, natural language processing, multichannel messaging infrastructure, and automation systems. The platform emphasizes runtime guardrails and version management as first-class concerns, reflecting a systems-oriented approach to production deployment. The company claims to deliver always-on, personalized service, positioning its agents as operational infrastructure rather than experimental tooling.

For engineers evaluating opportunities, the technical challenges likely involve scaling context-rich, stateful interactions across channels while maintaining consistency, handling edge cases in natural language understanding, and building abstraction layers that balance expressiveness with safety.

89 jobs

Modal

Modal operates a serverless compute platform designed to minimize infrastructure friction for ML inference, fine-tuning, and batch workloads. The platform provides instant GPU access with usage-based pricing, targeting teams that need to ship compute-intensive applications without managing scheduling, container orchestration, or resource allocation.

The architecture is built on custom infrastructure components - an in-house file system, container runtime, scheduler, and image builder - optimized for the latency and throughput characteristics of AI workloads. The technical stack spans Python, Rust, and Go at the systems level, with PyTorch, CUDA, vLLM, and TensorRT support for ML frameworks. This reflects prioritization of both developer ergonomics (Python interface) and low-level performance (Rust/Go for runtime components). The custom infrastructure signals investment in controlling the full vertical - from container initialization through GPU scheduling - rather than composing existing orchestration layers.

The team operates across New York, Stockholm, and San Francisco, and includes creators of open-source projects like Seaborn and Luigi, alongside academic researchers and engineers with experience building production systems. The platform positions itself around developer experience as a core constraint, with infrastructure complexity abstracted to reduce operational overhead for data and AI teams.

28 jobs

Reka

Reka builds unified multimodal foundation models that process text, images, video, and audio. The company's core technical focus is modeling the physical world through systems that handle perception, reasoning, and action across modalities. The team includes researchers and engineers from Google DeepMind and Facebook AI Research working on inference-critical domains including GPU performance engineering, computer vision, audio processing, and natural language understanding.

The technical stack centers on Python, PyTorch, and JAX for model development, with CUDA and C++ for performance-critical components. Infrastructure runs on Kubernetes and Slurm for orchestration and job scheduling. Engineering roles emphasize end-to-end ownership - individuals work across the stack from model architecture through deployment, addressing bottlenecks in latency, throughput, and operational complexity at production scale.

Reka operates remote-first, aggregating global talent into a distributed systems organization. The work targets enterprise and organizational deployments where multimodal capabilities must meet reliability and cost constraints. Team structure reflects early-stage dynamics: engineers wear multiple hats, and technical decisions directly shape product capabilities and production characteristics.

3 jobs

Toma

Toma operates a voice AI platform for automotive dealerships, processing over 1,000,000 calls since launching in 2024. The system handles inbound phone operations - service scheduling, call routing, and follow-up automation - with safeguards designed to manage transfer latency and revenue leakage.

The core technical challenge is maintaining conversational quality and intent detection accuracy across high-variance dealership scenarios (service appointments, parts inquiries, sales handoffs) while minimizing false transfers and dropped context. The platform implements transfer triggers, clawback mechanisms for mistimed handoffs, and follow-up alerts when human staff don't complete actions, addressing the operational complexity of human-AI transition points in production telephony.

Infrastructure runs on AWS with a TypeScript/Next.js frontend, PostgreSQL via Prisma for state management, and tRPC for type-safe API boundaries. The voice AI layer must handle real-time constraints - low-latency speech recognition and synthesis, sub-second intent classification - while managing concurrent call volume and dealership-specific context (inventory, scheduling systems, staff availability). Trade-offs center on model selection for conversational understanding versus inference cost at scale, and the reliability surface area of integrating with legacy dealership management systems.

Founded by engineers from Scale AI, Uber, Lyft, and Amazon; backed by Andreessen Horowitz and Y Combinator with $17 million Series A funding. Deployment spans dealerships across the United States, including Pohanka Automotive Group, SCHOMP, Hudson Automotive Group, and Bergey's.
Primary bottlenecks likely involve tuning voice models for domain-specific terminology (vehicle makes, service codes, dealership jargon), managing tail latency in transfer decisions where milliseconds impact customer experience, and evaluating conversational success beyond simple call completion - did the AI correctly capture appointment details, route urgency appropriately, preserve customer satisfaction? The system's value proposition hinges on converting missed calls and staff bottlenecks into captured revenue, which requires high precision on intent classification and low false-negative rates on transfer triggers to avoid revenue loss from mishandled interactions.

1 job