
About

Wabi is the first personal software platform, transforming how people interact with technology through AI-powered mini apps. With $20 million in pre-seed funding, the company has quickly established itself as a pioneer in the User-Generated Software (UGS) movement, enabling anyone to create, share, and remix personalized applications without writing code. Founded by Eugenia Kuyda, former CEO of Replika, Wabi is building what investors call the "YouTube of apps" - a social platform where millions of creators can build and distribute software tailored to individual needs, tastes, and contexts.

The platform represents a fundamental shift from one-size-fits-all applications to truly personal software experiences. Rather than searching for apps that approximately match their needs, users describe their exact requirements in natural language, and Wabi generates custom mini apps optimized for their specific routines, preferences, and life situations. Operating with a lean team of 2-10 employees, Wabi is positioned at the forefront of AI-driven creativity, turning every user into a potential software developer and ushering in a new era where software is made for all of us, by all of us.

Similar companies


Nebius

Nebius is a Nasdaq-listed technology company (NBIS) building full-stack AI infrastructure from its Amsterdam headquarters, with GPU clusters deployed across Europe and the United States. Led by CEO Arkady Volozh, the company operates AI-optimized sustainable data centers - including a facility 60 kilometers from Helsinki and a new Vineland, New Jersey site - and has raised significant capital ($700 million from investors including Accel, NVIDIA, and Orbis). The engineering organization, numbering in the hundreds, maintains deep expertise in world-class infrastructure and runs an in-house AI R&D team that dogfoods the platform to validate it against production ML practitioner requirements.

The infrastructure stack combines hyperscaler-scale features with supercomputer-grade performance. ISEG, Nebius's supercomputer, ranks among the world's most powerful systems. The platform integrates NVIDIA GPUs with NVIDIA InfiniBand networking, exposing workload orchestration through both Kubernetes and Slurm. The operational layer includes standard observability (Prometheus, Grafana), data infrastructure (PostgreSQL, Apache Spark), and ML tooling (MLflow, vLLM, Triton, Ray), with infrastructure as code managed via Terraform. This architecture targets the latency, throughput, and reliability requirements of AI training and inference workloads at scale.

The company has secured a multi-billion-dollar agreement with Microsoft to deliver dedicated AI infrastructure from its Vineland data center. Nebius serves startups, research institutes, and enterprises across the healthcare and life sciences, robotics, finance, and entertainment verticals. The technical approach emphasizes production-grade infrastructure that absorbs the operational complexity of large-scale AI deployments - managing GPU utilization, network bottlenecks, and the cost-performance trade-offs inherent in serving diverse AI workloads from model training through inference serving.

477 jobs

Heidi

Heidi builds an AI Care Partner that automates clinical documentation, form filling, and task management for clinicians worldwide. The system has returned over 18 million hours to clinicians in 18 months and currently supports more than 2 million patient visits weekly across 116 countries and 110+ languages. The company has raised nearly $100 million from Point72, Anthropic, and Blackbird, with a stated goal of halving the time required to deliver patient-first care.

The core technical challenge sits at the intersection of multilingual NLP, healthcare informatics, and production reliability at global scale. The system must handle clinical documentation workflows across diverse regulatory environments, languages, and medical specialties while meeting the accuracy and latency requirements that directly shape clinician workflows. The stack spans TypeScript, React, and Next.js on the frontend with Node.js (NestJS, Express) and Python on the backend, using PostgreSQL and MongoDB for persistence and running on GCP and AWS infrastructure.

The team includes clinicians, engineers, and designers, with most employees having healthcare backgrounds or direct experience with clinician burnout. The operational philosophy emphasizes shipping small, fast iteration cycles, and tolerance for failure in pursuit of reducing administrative burden. The Australia-based company operates globally with Docker-based deployments and CI/CD pipelines supporting continuous delivery across production environments.

112 jobs

Slingshot AI

Slingshot AI operates as a mental health research lab building a foundation model for psychology and an accompanying therapy chatbot. The technical stack spans model development (PyTorch, TensorFlow, JAX) and production infrastructure (GCP, Kubernetes, Cloud Run, gRPC), with client applications in Flutter and Next.js/React. The team combines machine learning engineering, product development, and clinical research expertise, working with therapists and clinicians to align model behavior with therapeutic practices.

The core technical challenge is training a domain-specific foundation model that supports user agency in mental health contexts - framing the product as a tool that helps users recognize their own capacity for change rather than an answer-dispensing assistant. This constraint requires careful training objective design and evaluation frameworks that measure therapeutic alignment, not just task completion. The system operates at global scale through partnerships with mental health organizations, though specific throughput and latency metrics are not disclosed.

Development follows rapid iteration cycles with an emphasis on shipping velocity. The engineering stack reflects production priorities: Rust for performance-critical paths, typed languages (TypeScript, Kotlin) for application logic, and container orchestration for deployment. The team works within the constraint of adapting general-purpose ML infrastructure to specialized clinical requirements while maintaining operational reliability for users seeking mental health support.

1 job

Toma

Toma operates a voice AI platform for automotive dealerships, processing over 1,000,000 calls since launching in 2024. The system handles inbound phone operations - service scheduling, call routing, and follow-up automation - with safeguards designed to manage transfer latency and revenue leakage. The core technical challenge is maintaining conversational quality and intent detection accuracy across high-variance dealership scenarios (service appointments, parts inquiries, sales handoffs) while minimizing false transfers and dropped context. The platform implements transfer triggers, clawback mechanisms for mistimed handoffs, and follow-up alerts when human staff fail to complete actions, addressing the operational complexity of human-AI transition points in production telephony.

Infrastructure runs on AWS with a TypeScript/Next.js frontend, PostgreSQL via Prisma for state management, and tRPC for type-safe API boundaries. The voice AI layer must handle real-time constraints - low-latency speech recognition and synthesis, sub-second intent classification - while managing concurrent call volume and dealership-specific context (inventory, scheduling systems, staff availability). Trade-offs center on model selection for conversational understanding versus inference cost at scale, and on the reliability surface area of integrating with legacy dealership management systems.

Founded by engineers from Scale AI, Uber, Lyft, and Amazon, Toma is backed by Andreessen Horowitz and Y Combinator with $17 million in Series A funding. Deployments span dealerships across the United States, including Pohanka Automotive Group, SCHOMP, Hudson Automotive Group, and Bergey's.
Primary bottlenecks likely involve tuning voice models for domain-specific terminology (vehicle makes, service codes, dealership jargon), managing tail latency in transfer decisions where milliseconds impact customer experience, and evaluating conversational success beyond simple call completion: did the AI correctly capture appointment details, route urgency appropriately, and preserve customer satisfaction? The system's value proposition hinges on converting missed calls and staff bottlenecks into captured revenue, which requires high precision on intent classification and low false-negative rates on transfer triggers to avoid revenue loss from mishandled interactions.
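The transfer logic described above can be sketched in TypeScript. This is a minimal, hypothetical illustration of a precision-first transfer trigger, not Toma's actual implementation; the function name, signal fields, and thresholds are all assumptions invented for the example.

```typescript
// Hypothetical sketch of a transfer-trigger decision for a dealership voice
// agent. All names and thresholds are illustrative assumptions.
type CallSignals = {
  intentConfidence: number; // intent classifier confidence in [0, 1]
  requiresHuman: boolean;   // e.g. explicit transfer request or sales handoff intent
  latencyMs: number;        // time spent deciding on this conversational turn
};

type Action = "continue" | "transfer" | "clarify";

// Favor precision on transfers: hand off only when the model is confident a
// human is needed, ask a clarifying question when uncertain, and fail open to
// a human when a turn exceeds the latency budget - keeping both false
// transfers and dropped callers (revenue leakage) low.
function decideTransfer(s: CallSignals, maxLatencyMs = 800): Action {
  if (s.latencyMs > maxLatencyMs) return "transfer"; // fail open on slow turns
  if (s.requiresHuman && s.intentConfidence >= 0.85) return "transfer";
  if (s.intentConfidence < 0.5) return "clarify";
  return "continue";
}
```

The threshold asymmetry reflects the trade-off named in the blurb: a missed transfer loses revenue, but a false transfer wastes staff time and degrades the caller experience, so the confident-handoff bar sits well above the clarification bar.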

1 job