
About

Tenstorrent builds computers for AI from the ground up: architecture, silicon, and software designed as a unified system. The company develops AI Graph Processors and high-performance RISC-V CPUs, packaged as configurable chiplets. Under the technical leadership of CEO Jim Keller, the engineering organization spans North America, Europe, and Asia, drawing on backgrounds at AMD, Tesla, and Intel. The approach centers on eliminating vendor lock-in through open-source tooling - TT-Forge (compiler), TT-Metalium (runtime), and fully open RISC-V CPU designs - paired with tight hardware-software co-design between the two teams.

The technical stack reflects production-system priorities: RISC-V cores, UCIe interconnect, PCIe interfaces, and RTL design in Verilog/SystemVerilog for silicon. The software layer includes C++ and Python for core development, MLIR for compiler infrastructure, and Linux-based deployment (RHEL, Ubuntu) managed through Ansible. The distributed engineering organization is structured to maintain startup iteration speed while shipping regularly at global scale. Architecture work spans SoC design, AI acceleration, compiler optimization, and the operational complexity of coordinating hardware and software release cycles.

Tenstorrent's model prioritizes technical depth over presentation: hardware and software engineers collaborate directly on bottlenecks in inference throughput, latency characteristics, and cost per operation. The open-source commitment extends beyond software libraries to actual CPU designs, creating evaluation paths without procurement barriers. For engineers focused on inference systems, the work involves compiler optimization against real silicon constraints, runtime performance tuning across the stack, and architectural decisions that propagate from chiplet design through model deployment.
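The cost-per-operation framing above reduces to simple arithmetic. A generic back-of-envelope sketch in Python (the function name and all numbers are illustrative, not Tenstorrent figures or benchmarks):

```python
def cost_per_million_tokens(instance_usd_per_hour: float,
                            tokens_per_second: float) -> float:
    """Convert instance pricing and sustained throughput into $ per 1M tokens."""
    tokens_per_hour = tokens_per_second * 3600
    return instance_usd_per_hour / tokens_per_hour * 1_000_000

# Illustrative numbers only: a $2.50/hr accelerator sustaining 5,000 tok/s.
print(round(cost_per_million_tokens(2.50, 5000), 4))  # 0.1389
```

The same arithmetic works for any unit of work (images, audio seconds, graph ops); the hard part in practice is measuring *sustained* throughput under realistic batch sizes, which is where the compiler and runtime tuning described above enters.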

Open roles at Tenstorrent

Explore 107 open positions at Tenstorrent and find your next opportunity.

Static Timing Analysis (STA) Methodology Engineer
Santa Clara, California, United States (Hybrid)
$100K – $500K Yearly · 1mo ago

High Speed AI Interconnect Signal Integrity Engineer
Austin, Texas, United States (On-site)
$100K – $500K Yearly · 1mo ago

Software Architect, Automotive Robotics
Bavaria, Germany + 1 more (Remote)
1mo ago

Customer TPM, RISCV
Austin, Texas, United States (Hybrid)
$100K – $500K Yearly · 1mo ago

Field Application Engineer - AI Systems & Solutions
München, Bavaria, Germany (Hybrid)
1mo ago

Staff Engineer, Software Release and Packaging - RISC V
United States + 2 more (Remote)
$100K – $500K Yearly · 1mo ago

Sr. Staff, Design Verification - CPU Cluster / SoC
Bengaluru, Karnataka, India (Hybrid)
1mo ago

Sr. Staff Engineer, CPU System Microarchitect
Bengaluru, Karnataka, India (Hybrid)
1mo ago

Emulation Engineer, Automotive Robotics
Germany (Remote)
1mo ago

SOC Architect
Germany (Remote)
1mo ago

Network ASIC Designer
North America (Remote)
$100K – $500K Yearly · 1mo ago

Software Engineer, TT-Distributed
Santa Clara, California, United States (Hybrid)
$100K – $500K Yearly · 1mo ago

RTL Engineer, Automotive Robotics
Germany (Remote)
1mo ago

Physical Design Engineer: Die-to-Die Interface (RTL to GDSII)
United States (Remote)
$100K – $500K Yearly · 1mo ago

Manager, Data Center & Lab Deployments
Toronto, Ontario, Canada (Hybrid)
$100K – $500K Yearly · 2mo ago

Staff Analog Design Engineer
United States (Remote)
$100K – $500K Yearly · 2mo ago

Similar companies


HappyRobot

HappyRobot is an AI workforce platform founded in 2023 that builds autonomous agents to handle end-to-end operational work across phone, email, messaging, and documents. The company focuses on logistics and industrial operations - supply chains, freight, and businesses that move physical goods - where complex, patterned work spans multiple communication channels and document formats. Rather than augmenting human workflows, HappyRobot's system is designed to own complete tasks autonomously, operating as an AI-native OS for operations. The platform has been deployed across over 150 enterprise customers, including DHL and Ryder, and the company has raised $62 million from investors including Y Combinator and Andreessen Horowitz.

The technical approach centers on building AI workers that can manage the operational complexity inherent in real-economy businesses: inbound calls that require looking up order status across internal systems, email threads with multi-party coordination, and document processing that feeds into downstream workflows. The platform integrates natural language processing for conversational interfaces with document automation, handling the operational load that typically requires human judgment and context-switching. The stack is built on TypeScript, Next.js, and Go, suggesting a focus on both frontend orchestration and backend performance at production scale.

The founding team - Pablo Palafox, Javier Palafox, and Luis Paarup - brings backgrounds in engineering and logistics, positioning the company to understand both the technical constraints of building reliable AI systems and the operational bottlenecks in its target industries. The AI-native positioning reflects a systems-level bet: automating operations requires rethinking the entire operational stack rather than bolting AI onto existing software workflows. For engineers, the work involves building agents that handle reliability and failure modes in production environments where downtime has direct business impact - missed shipments, delayed communications, operational backlogs.
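HappyRobot's production stack is TypeScript, Next.js, and Go, but the reliability discipline described above is language-agnostic. A minimal Python sketch of retry-with-backoff around a flaky downstream lookup (all names and the order identifier are hypothetical):

```python
import time

def call_with_retry(fn, max_attempts=4, base_delay=0.01):
    """Retry a flaky downstream call (e.g. an order-status lookup) with exponential backoff."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except ConnectionError:
            if attempt == max_attempts - 1:
                raise  # exhausted: surface the failure to the caller
            time.sleep(base_delay * (2 ** attempt))

# Simulate a lookup that fails twice, then succeeds on the third attempt.
calls = {"n": 0}
def flaky_lookup():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("order API unavailable")
    return "order #A17 in transit"

print(call_with_retry(flaky_lookup))  # order #A17 in transit
```

A production agent would layer idempotency keys and dead-letter handling on top, since a missed shipment notification cannot simply be dropped after the last retry.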

95 jobs

Decagon

Decagon builds a conversational AI platform designed to replace or augment legacy customer support systems by deploying intelligent AI agents across chat, email, and voice channels. The company positions its technology as infrastructure for delivering concierge-level customer experiences at scale, targeting brands that want to support, onboard, and retain customers without proportional headcount growth. Led by CEO Jesse Zhang and founded by serial entrepreneurs, Decagon operates from the US and focuses on the operational constraints of traditional customer support systems.

The platform's core technical approach centers on Agent Operating Procedures (AOPs), a natural-language-to-code compilation system that lets non-technical users define agent behavior while preserving technical-team control over guardrails, integrations, and versioning. This design addresses a common trade-off in AI tooling: enabling rapid iteration by domain experts without sacrificing reliability controls or introducing configuration drift. The agent orchestration layer spans multiple channels and claims to amplify CX team impact by 10x, though specific benchmarks around latency, accuracy, or failure rate are not publicly detailed.

Decagon's technical domains span conversational AI, natural language processing, multichannel messaging infrastructure, and automation systems. The platform treats runtime guardrails and version management as first-class concerns, reflecting a systems-oriented approach to production deployment, and positions its agents as always-on operational infrastructure rather than experimental tooling. For engineers evaluating opportunities, the technical challenges likely involve scaling context-rich, stateful interactions across channels while maintaining consistency, handling edge cases in natural language understanding, and building abstraction layers that balance expressiveness with safety.
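Decagon does not publish AOP internals, so the following is a purely hypothetical Python sketch of the pattern the description implies: domain experts author procedure steps, while engineering owns hard guardrails that the compiled procedure cannot override. Every name here is invented.

```python
# Engineering-owned guardrails: hard limits the compiled procedure cannot bypass.
GUARDRAILS = {"refund": lambda amount: amount is not None and amount <= 50}

def compile_procedure(steps):
    """Toy stand-in for NL-to-code compilation: turn (action, arg) step
    descriptions into a runnable procedure that checks guardrails per step."""
    def run(context):
        log = []
        for action, arg in steps:
            guard = GUARDRAILS.get(action, lambda _: True)  # default: allowed
            if guard(arg):
                log.append(f"{action}:{arg}:ok")
            else:
                log.append(f"{action}:{arg}:blocked")
        return log
    return run

# A 'domain expert' authors the steps; the $200 refund exceeds the guardrail.
agent = compile_procedure([("greet", None), ("refund", 30), ("refund", 200)])
print(agent({}))  # ['greet:None:ok', 'refund:30:ok', 'refund:200:blocked']
```

The design point this illustrates is the separation of authorship from authority: iterating on `steps` is cheap and safe precisely because the guardrail table lives outside the procedure.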

89 jobs

Modal

Modal operates a serverless compute platform designed to minimize infrastructure friction for ML inference, fine-tuning, and batch workloads. The platform provides instant GPU access with usage-based pricing, targeting teams that need to ship compute-intensive applications without managing scheduling, container orchestration, or resource allocation. The architecture is built on custom infrastructure components - an in-house file system, container runtime, scheduler, and image builder - optimized for the latency and throughput characteristics of AI workloads.

The technical stack spans Python, Rust, and Go at the systems level, with PyTorch, CUDA, vLLM, and TensorRT support for ML frameworks. This reflects prioritization of both developer ergonomics (Python interface) and low-level performance (Rust and Go for runtime components). The custom infrastructure signals investment in controlling the full vertical - from container initialization through GPU scheduling - rather than composing existing orchestration layers.

The team operates across New York, Stockholm, and San Francisco, and includes creators of open-source projects like Seaborn and Luigi, alongside academic researchers and engineers with experience building production systems. The platform positions developer experience as a core constraint, abstracting infrastructure complexity to reduce operational overhead for data and AI teams.
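This is not Modal's real SDK; the toy Python sketch below only illustrates the general serverless-function pattern the description implies - decorate a plain function with its resource requirement and let a scheduler (stubbed out here) handle placement. All names are invented.

```python
# Registry a stubbed 'scheduler' would consult for placement decisions.
REGISTRY = []

def gpu_function(gpu: str):
    """Register a function with its GPU requirement (toy stand-in for a
    real platform, which would containerize it and place it on a GPU node)."""
    def wrap(fn):
        REGISTRY.append((fn.__name__, gpu))
        def remote(*args, **kwargs):
            # Locally we just call the function; a real platform would
            # dispatch to a remote worker matching the requirement.
            return fn(*args, **kwargs)
        remote.__name__ = fn.__name__
        return remote
    return wrap

@gpu_function(gpu="H100")
def embed(text: str) -> int:
    # Placeholder 'model': return a fake embedding dimension (word count).
    return len(text.split())

print(REGISTRY)                          # [('embed', 'H100')]
print(embed("serverless gpu compute"))   # 3
```

The appeal of this shape is that the decorated function stays plain Python: local testing calls it directly, while the platform version transparently swaps in remote, metered execution.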

28 jobs

Mirelo AI

Mirelo AI builds foundation models for generating synchronized audio for video content, targeting the latency and quality bottleneck in audio-for-video workflows. Founded in 2023 in Berlin, the company raised $41 million in seed funding co-led by Index Ventures and Andreessen Horowitz. Its models generate synchronized sound effects in seconds rather than the hours typically required for manual sound design, addressing production throughput constraints across gaming, film, social media, and broader visual content verticals.

The technical stack centers on PyTorch with transformer architectures, optimized for H100 and H200 GPUs using Nsight profiling and SLURM for cluster orchestration. The team sources from Google Brain, Amazon, Meta FAIR, Disney, ETH Zürich, and the Max Planck Institutes, combining AI research depth with domain expertise from musicians and product specialists. Co-founder and CEO CJ Simon-Gabriel previously worked at AWS Labs, where the founding team originated.

The core technical challenge is tight audio-visual synchronization at generation time - a constraint that spans model architecture design, latency optimization, and evaluation methodology. Production systems must handle variable-length video inputs while maintaining temporal coherence across the generated audio, requiring careful trade-offs between generation speed, output quality, and computational cost. The company positions its models as infrastructure for visual content pipelines, treating audio generation as a systems problem rather than a standalone creative tool.
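The variable-length synchronization constraint implies simple but easy-to-fumble bookkeeping between frame indices and audio samples. A toy Python sketch of that mapping (illustrative only, not Mirelo's method):

```python
def frame_to_sample_span(frame_idx: int, fps: float, sample_rate: int):
    """Return the [start, end) audio-sample range covered by one video frame.

    Rounding per-boundary (rather than rounding a per-frame step once)
    keeps spans contiguous and drift-free for any fps/sample-rate pair.
    """
    start = round(frame_idx * sample_rate / fps)
    end = round((frame_idx + 1) * sample_rate / fps)
    return start, end

# 24 fps video against 48 kHz audio: each frame covers exactly 2,000 samples.
print(frame_to_sample_span(0, 24.0, 48000))   # (0, 2000)
print(frame_to_sample_span(10, 24.0, 48000))  # (20000, 22000)
```

For non-integer ratios (e.g. 29.97 fps), spans vary by a sample but boundaries still tile the timeline exactly, which is the property a generator conditioning audio on frames needs to preserve temporal coherence.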

8 jobs