About

Descript builds a video and audio editing platform that replaces timeline-based manipulation with text-based editing - users cut and rearrange content by manipulating the transcribed text rather than working directly with waveforms or video tracks. The system serves millions of creators, handling the full production pipeline from recording through collaborative editing to publication. Core technical domains span machine learning for transcription and automated design, text-based editing interfaces built on React and TypeScript, and distributed collaboration infrastructure.

The platform's architecture supports both solo and team workflows across time zones, with backend systems running on PostgreSQL and Redis. Technical focus areas include generative AI capabilities that create content from natural language descriptions, automated design systems that reduce manual formatting work, and the fundamental text-to-media mapping that enables document-style editing of temporal content. The team combines creator domain expertise with systems engineering - reflected in stated priorities around human-centered design and products that handle real production constraints rather than demo cases.
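
The text-to-media mapping mentioned above can be illustrated with a small sketch: each transcript word carries the time range it occupies in the source media, so deleting words in the document reduces to recomputing which media segments survive. The following is a hypothetical TypeScript sketch under assumed data shapes, not Descript's internal model:

```typescript
// Hypothetical sketch of text-to-media mapping. Each transcript word carries
// the time range it occupies in the underlying media; editing the text then
// reduces to recomputing which time ranges survive. All names are invented
// for illustration.

interface Word {
  text: string;
  start: number; // seconds into the source media
  end: number;
}

// Given the edited word list (a subsequence of the original transcript),
// emit the media segments to keep, merging adjacent words into contiguous cuts.
function segmentsForEdit(kept: Word[]): Array<{ start: number; end: number }> {
  const segments: Array<{ start: number; end: number }> = [];
  for (const w of kept) {
    const last = segments[segments.length - 1];
    if (last && Math.abs(last.end - w.start) < 0.05) {
      last.end = w.end; // contiguous with the previous segment: extend it
    } else {
      segments.push({ start: w.start, end: w.end });
    }
  }
  return segments;
}

// Example: deleting the filler word "um" from "so um hello there" yields two
// media cuts rather than four.
const transcript: Word[] = [
  { text: "so", start: 0.0, end: 0.4 },
  { text: "um", start: 0.4, end: 0.9 },
  { text: "hello", start: 0.9, end: 1.3 },
  { text: "there", start: 1.3, end: 1.8 },
];
const edited = transcript.filter((w) => w.text !== "um");
const cuts = segmentsForEdit(edited);
// cuts: [{ start: 0.0, end: 0.4 }, { start: 0.9, end: 1.8 }]
```

The merge tolerance is what turns a word-level edit into a minimal set of cuts; a production system would also have to handle overlapping speakers and re-timed playback, which this sketch ignores.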

The stack centers on TypeScript/React for client interfaces, Python for ML pipelines, and SQL-based data infrastructure with dbt for transformation logic; REST APIs provide the integration points. Current engineering emphasis appears weighted toward extending ML capabilities - transcription accuracy, generative features, design automation - alongside the operational work of keeping collaborative real-time editing reliable and performant at scale.

Open roles at Descript

Explore 20 open positions at Descript and find your next opportunity.

Senior Software Engineer, AI Platform and Enablement

Descript

Mission District, San Francisco, California, US or Remote (United States)

$180K – $286K Yearly · 4d ago
Head of Partnerships

Descript

Mission District, San Francisco, California, US or Remote (United States)

$200K – $260K Yearly · 2w ago
Director, Customer Success

Descript

California, United States + 1 more (Remote)

$140K – $180K Yearly · 3w ago
AI Growth Specialist (BDR)

Descript

United States (Remote)

$60K – $120K Yearly · 3w ago
Director, Sales

Descript

United States (Remote)

$180K – $240K Yearly · 3w ago
Software Engineer, Editor

Descript

United States (Remote)

$193K – $250K Yearly · 3w ago
Senior Software Engineer, Client Platform

Descript

United States (Remote)

$195K – $250K Yearly · 4w ago
Senior Counsel

Descript

San Francisco, California, United States (On-site)

$225K – $275K Yearly · 4w ago
Senior Accounting Manager

Descript

United States (Remote)

$160K – $185K Yearly · 4w ago
Product Manager, AI Models

Descript

Mission District, San Francisco, California, US or Remote (Worldwide)

$171K – $235K Yearly · 4w ago
Infrastructure Engineer

Descript

United States (Remote)

$191K – $250K Yearly · 4w ago
Product Designer, Growth & Monetization

Descript

San Francisco, California, United States (On-site)

$150K – $215K Yearly · 1mo ago
Director, Demand Generation

Descript

Mission District, San Francisco, California, US or Remote (United States)

$203K – $230K Yearly · 1mo ago
Senior Data Scientist, Marketing

Descript

United States (Remote)

$170K – $208K Yearly · 1mo ago
Engineering Manager, Narrative Editing

Descript

San Francisco, California, United States (On-site)

$222.4K – $261.7K Yearly · 2mo ago
Account Executive

Descript

United States (Remote)

$100K – $130K Yearly · 2mo ago
Analytics Engineer

Descript

United States (Remote)

$170K – $208K Yearly · 2mo ago
Product Manager, Editor

Descript

United States (Remote)

$175K – $265K Yearly · 3mo ago
Software Engineer, Agent

Descript

United States + 1 more (Remote)

$180K – $286K Yearly · 3mo ago
Head of Platform Engineering

Descript

San Francisco, California, United States (On-site)

$224K – $296K Yearly · 3mo ago

Similar companies

Vertiv

Vertiv operates critical digital infrastructure at global scale, delivering end-to-end systems that power and cool data centers, communication networks, and commercial facilities. The company's technical scope spans grid-to-chip power chains, thermal management, and intelligent monitoring - infrastructure that determines operational availability and performance characteristics for compute workloads from edge deployments to hyperscale cloud environments. With decades of domain expertise, Vertiv addresses the operational bottlenecks inherent in maintaining continuous uptime for mission-critical applications.

The product portfolio reflects infrastructure constraints across the stack: critical power solutions that maintain grid-to-chip continuity, adaptive cooling systems calibrated for varying thermal loads, and liquid cooling technologies designed specifically for high-density compute environments where traditional air cooling becomes a throughput limiter. Modular prefabricated data centers enable deployment at speed, while advanced battery energy storage systems provide backup power with different trade-offs than traditional UPS architectures. Intelligent monitoring and management systems surface operational visibility across these integrated components.

Vertiv serves customers ranging from hyperscale cloud providers managing efficiency at massive scale to local telecommunications networks with different reliability and cost constraints. The company positions its systems around operational excellence and business continuity - measurable outcomes in environments where infrastructure failures directly impact application availability. Digital services and expert support complement hardware deployment, addressing the operational complexity of maintaining critical infrastructure across geographically distributed sites.

Under CEO Giordano Albertazzi, the company maintains a hardware-first approach while incorporating software monitoring and management capabilities, with stated emphasis on sustainability goals alongside traditional reliability metrics.

140 jobs
Cohere

Cohere builds enterprise-focused foundation models designed for production deployment with emphasis on security, privacy, and operational trust. Founded in 2019 in Toronto, the company has raised nearly $1 billion and scaled to hundreds of employees worldwide. The technical focus spans semantic search, content generation, and customer experience applications - domains where model reliability and data governance are non-negotiable constraints for enterprise adoption.

The company's architecture decisions reflect production realities over research novelty. Models are architected for deployment into regulated environments where data residency, access controls, and audit trails matter as much as accuracy metrics. This positioning addresses the gap between frontier model capabilities and enterprise operational requirements: latency SLAs, cost predictability, and compliance frameworks that prevent many organizations from operationalizing public AI APIs.

Cohere Labs has published over 100 papers and built a research community of 4,500+ researchers, signaling ongoing investment in foundational work rather than a pure application-layer focus. The team composition skews heavily toward researchers and engineers from academic backgrounds, which maps to the technical challenge space - building models that balance performance, safety constraints, and deployment flexibility across varied enterprise infrastructure.

106 jobs
Pinecone

Pinecone operates a fully managed vector database service designed for production AI applications requiring storage and retrieval of high-dimensional embeddings. The system handles vector search at scale across recommendation systems, semantic search, and related ML-backed services. Founded by Edo Liberty, formerly a research director at AWS with prior experience building custom vector search systems at large scale, the company is credited with establishing the vector database category as a distinct infrastructure layer.

The technical stack centers on systems languages - Rust, Go, C++, and Python - with RocksDB as the storage engine and Kubernetes orchestration across AWS, GCP, and Azure. This architecture targets the operational complexity of managing embedding indices, query latency, and throughput at production scale, abstracting infrastructure decisions from engineering teams deploying AI features.

The platform serves thousands of companies, positioning itself on ease of deployment and reduced time-to-production for vector-backed applications. The founding principle emphasizes accessibility for engineering teams of varying sizes, evolving the managed service model to minimize operational overhead in running vector workloads. Core focus areas include retrieval performance, reliability under production load, and cost-efficiency trade-offs inherent to high-dimensional search systems.
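
The core operation such a service performs can be shown in miniature: rank stored embeddings by similarity to a query vector and return the closest matches. This toy TypeScript example is illustrative only - the names are invented, and a production engine like Pinecone's relies on approximate indices and distributed storage rather than the linear scan shown here:

```typescript
// Toy brute-force nearest-neighbor search over embeddings, illustrating the
// query path a vector database serves at scale. All names are hypothetical;
// nothing here reflects Pinecone's real API.

function dot(a: number[], b: number[]): number {
  return a.reduce((s, x, i) => s + x * b[i], 0);
}

// Cosine similarity: dot product of the vectors divided by their magnitudes.
function cosine(a: number[], b: number[]): number {
  return dot(a, b) / (Math.sqrt(dot(a, a)) * Math.sqrt(dot(b, b)));
}

// Score every stored vector against the query, sort descending, keep top k.
function topK(
  index: Array<{ id: string; vec: number[] }>,
  query: number[],
  k: number
): string[] {
  return index
    .map((e) => ({ id: e.id, score: cosine(e.vec, query) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, k)
    .map((e) => e.id);
}

const index = [
  { id: "doc-a", vec: [1, 0, 0] },
  { id: "doc-b", vec: [0.9, 0.1, 0] },
  { id: "doc-c", vec: [0, 0, 1] },
];
const nearest = topK(index, [1, 0.05, 0], 2);
// nearest: ["doc-a", "doc-b"]
```

The linear scan is O(n) per query; the engineering substance of a vector database lies in replacing it with approximate nearest-neighbor indices that trade a small amount of recall for orders-of-magnitude lower latency.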

9 jobs