
About

Lovable operates an AI platform that converts conversational prompts into production web applications, targeting users without programming experience. The system emerged from GPT Engineer, an open-source code generation tool developed in mid-2023 that became one of the fastest-growing GitHub repositories by demonstrating LLM capabilities for synthesizing functional code from natural language. The company launched as GPT Engineer App in November 2023 and rebranded to Lovable in December 2024.

The platform's technical stack runs on React frontends with Golang and Rust backend services, deployed across GCP, AWS, and Cloudflare infrastructure. Observability relies on Grafana and OpenTelemetry (OTEL); CI/CD runs through GitHub Actions; infrastructure is managed as code with Terraform. Application services integrate Supabase for data persistence and Stripe for payments, with Vite handling build tooling. The architecture supports end-to-end web application generation, from prompt input through deployment.

The system's core constraint is translating arbitrary user intent into coherent application logic and UI without the iterative debugging cycles typical of manual development. This requires handling ambiguous specifications, generating maintainable code structures, and managing state complexity across generated components - problems that scale poorly with application size. Production readiness depends on generated code quality, runtime reliability of synthesized logic, and the operational cost of inference at scale. The open-source foundation provides transparency into generation capabilities and limitations.

Open roles at Lovable

Explore 37 open positions at Lovable and find your next opportunity.


Head of Strategic Operations

Lovable

Stockholm, Stockholm, Sweden (On-site)

3w ago

Head of Channel Sales

Lovable

California, United States + 1 more (Remote)

3w ago

Partnership Manager, Product

Lovable

San Francisco, California, United States (On-site)

3w ago

Senior Solutions Marketer

Lovable

San Francisco, California, United States (Hybrid)

3w ago

Enterprise Solutions Marketer

Lovable

United States + 1 more (Remote)

3w ago

Penetration Tester

Lovable

Stockholm, Stockholm, Sweden (On-site)

3w ago

Partnership Manager, Platform

Lovable

Massachusetts, United States + 1 more (Remote)

3w ago

Channel Manager, GSI/Alliances

Lovable

California, United States + 2 more (Remote)

3w ago

FullStack Engineer - Product Security

Lovable

Stockholm, Stockholm, Sweden (On-site)

3w ago

Partnership Manager, Technology

Lovable

San Francisco, California, United States (On-site)

3w ago

IT Specialist

Lovable

Stockholm, Sweden (On-site)

3w ago

AI Engineer

Lovable

Stockholm, Stockholm, Sweden (On-site)

2mo ago

Finance & BizOps, Strategic Partnerships

Lovable

San Francisco, California, United States (On-site)

2mo ago

Deployment Strategist

Lovable

United States + 1 more (Remote)

2mo ago

Solutions Architect

Lovable

Stockholm, Stockholm, Sweden (On-site)

2mo ago

Head of Community Marketing

Lovable

New York, United States (On-site)

2mo ago

Brand Designer

Lovable

Stockholm, Stockholm, Sweden (On-site)

3mo ago

Motion Designer

Lovable

Stockholm, Stockholm, Sweden (On-site)

3mo ago

UX Researcher

Lovable

Stockholm, Stockholm, Sweden (On-site)

3mo ago

Similar companies


Tenstorrent

Tenstorrent builds computers for AI from the ground up: architecture, silicon, and software as a unified system. The company develops AI Graph Processors and high-performance RISC-V CPUs, packaged as configurable chiplets. Under the technical leadership of CEO Jim Keller, the engineering organization spans North America, Europe, and Asia, drawing from backgrounds at AMD, Tesla, and Intel. The approach centers on eliminating vendor lock-in through open-source tooling - TT-Forge (compiler), tt-metalium (runtime), and fully open RISC-V CPU designs - paired with hardware-software co-design in which both teams work in tight collaboration. The technical stack reflects production-systems priorities: RISC-V cores, UCIe interconnect, PCIe interfaces, and RTL design in Verilog/SystemVerilog for silicon. The software layer includes C++ and Python for core development, MLIR for compiler infrastructure, and Linux-based deployment (RHEL, Ubuntu) managed through Ansible. Engineers ship regularly in a distributed organization structured to maintain startup iteration speed while operating at global scale. The architecture work spans SoC design, AI acceleration, compiler optimization, and the operational complexity of coordinating hardware and software release cycles. Tenstorrent's model prioritizes technical depth over presentation: hardware and software engineers collaborate directly on bottlenecks in inference throughput, latency characteristics, and cost per operation. The open-source commitment extends beyond software libraries to actual CPU designs, creating evaluation paths without procurement barriers. For engineers focused on inference systems, the work involves compiler optimization against real silicon constraints, runtime performance tuning across the stack, and architectural decisions that propagate from chiplet design through model deployment.

169 jobs

Replit

Replit operates a web-based code editor and multiplayer computing environment used by millions for collaborative software development. The platform eliminates traditional barriers to application creation through natural language interfaces, allowing users to build applications without conventional development workflows - demonstrated by architectural decisions like removing the save button from their editor. The multiplayer environment serves as infrastructure for experimentation, sharing, and collaborative growth at scale. The company measures success by the number of people empowered to create software rather than vanity metrics, reflecting a systems-level focus on removing bottlenecks in developer onboarding and productivity. Technical decisions prioritize shipping velocity and operational autonomy: the culture emphasizes extreme ownership, radical bets, and bias toward action. Engineers operate with the latitude to pursue emergent ideas and question established patterns when friction appears in the development loop. The platform's architecture supports collaborative coding workflows at scale, handling millions of concurrent users across a shared computing environment. This requires managing trade-offs between multi-tenancy constraints, latency in collaborative editing, and operational complexity of maintaining compute resources for distributed development sessions. The technical focus centers on developer tools, web-based editing infrastructure, and the reliability challenges of real-time collaborative computing.

76 jobs

FurtherAI

FurtherAI builds domain-specific AI infrastructure for commercial insurance workflows, targeting the document-heavy operational bottlenecks that dominate underwriting, claims processing, and policy comparison work. Their AI Workspace handles submission intake, underwriting audits, and compliance checks by parsing and normalizing unstructured data from broker letters, property schedules, ACORD forms, and loss histories. The system reports 95–97% accuracy on these tasks, compared to 70–77% for manual processing, addressing a workflow layer where precision directly impacts underwriting decisions and operational throughput. The platform is deployed by insurers, reinsurers, MGAs, and brokers writing over $15B in premiums across all 50 U.S. states. Technical focus areas include document understanding, NLP for insurance-specific language and formats, data normalization pipelines, and workflow automation that integrates with existing carrier systems. The core technical challenge is reliability at scale across heterogeneous document types and insurance product lines, where edge cases in policy language or submission format can propagate downstream into underwriting errors or compliance gaps. FurtherAI operates in a sector facing a projected workforce reduction of 400,000 by 2026, with approximately 3 million insurance professionals currently handling manual document processing. The system architecture must meet the latency requirements of underwriting timelines while maintaining accuracy thresholds that satisfy regulatory and risk management standards. Key operational trade-offs include throughput on batch processing of submissions versus real-time responsiveness for urgent underwriting decisions, and the cost-accuracy frontier for document parsing models across different insurance product complexities.

18 jobs