Slingshot AI

About

Slingshot AI operates as a mental health research lab building a foundation model for psychology and an accompanying therapy chatbot. The technical stack spans model development (PyTorch, TensorFlow, JAX) and production infrastructure (GCP, Kubernetes, Cloud Run, gRPC), with client applications in Flutter and Next.js/React. The team combines machine learning engineering, product development, and clinical research expertise, working with therapists and clinicians to align model behavior with therapeutic practices.

The core technical challenge is training a domain-specific foundation model that supports user agency in mental health contexts - framing the product as a tool that helps users recognize their own capacity for change rather than an answer-dispensing assistant. This design constraint requires careful training objective design and evaluation frameworks that measure therapeutic alignment, not just task completion. The system operates at global scale through partnerships with mental health organizations, though specific throughput or latency metrics are not disclosed.
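Slingshot has not published its evaluation framework. As a rough illustration of what "therapeutic alignment, not just task completion" could mean in practice, the toy check below scores replies on whether they invite reflection rather than dispense answers; the marker lists and function names are entirely hypothetical, and a real framework would rely on clinician-designed rubrics rather than keyword heuristics.

```python
from dataclasses import dataclass

# Hypothetical marker lists -- stand-ins for clinician-designed rubrics.
DIRECTIVE = ("you should", "you must", "the answer is")
REFLECTIVE = ("what do you think", "how did that feel", "what would help")

@dataclass
class Turn:
    user: str
    model: str

def supports_agency(reply: str) -> bool:
    """Crude proxy: reflective framing present, directive framing absent."""
    r = reply.lower()
    return any(m in r for m in REFLECTIVE) and not any(m in r for m in DIRECTIVE)

def alignment_rate(turns: list[Turn]) -> float:
    """Fraction of model replies that pass the agency check."""
    return sum(supports_agency(t.model) for t in turns) / len(turns)
```

The point of splitting the metric out this way is that a reply can complete the task (answer the user) while failing the alignment dimension, so the two must be tracked separately.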

Development follows rapid iteration cycles with emphasis on shipping velocity. The engineering stack reflects production priorities: Rust for performance-critical paths, typed languages (TypeScript, Kotlin) for application logic, and container orchestration for deployment. The team works within the constraint of adapting general-purpose ML infrastructure to specialized clinical requirements while maintaining operational reliability for users seeking mental health support.

Open roles at Slingshot AI

Explore 1 open position at Slingshot AI and find your next opportunity.

Engineering Manager

Slingshot AI

London, England, United Kingdom (On-site)

3mo ago

Similar companies


Reflection AI

Reflection AI develops open foundation models targeting superintelligent autonomous systems, with current work focused on autonomous coding as a path to broader cognitive automation. The company combines reinforcement learning and large language models to build systems capable of handling most cognitive work on a computer, positioning autonomous code generation as the bottleneck to unlock that capability.

The team includes contributors to AlphaGo, AlphaZero, PaLM, GPT-4, and Gemini, bringing production experience across game-playing RL systems and frontier language models. This background suggests familiarity with the trade-offs in training large-scale models - compute efficiency, sample complexity, and the operational challenges of running RL at scale alongside supervised pretraining.

Reflection's stated objective centers on keeping superintelligence open and accessible through open foundation models. For inference practitioners, this implies potential work on model architectures, training infrastructure, and deployment systems designed for broad distribution rather than proprietary hosting. The autonomous coding focus suggests evaluation infrastructure for code generation, likely including metrics beyond pass@k - compilation rates, execution correctness, and performance characteristics of generated code under real-world constraints.
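For reference on the baseline metric the blurb mentions: the standard unbiased pass@k estimator (n generations per problem, c of them correct) used in code-generation evaluation can be computed numerically stably as below; the `compilation_rate` helper is an illustrative companion metric, not anything Reflection has published.

```python
def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k: probability that at least one of k samples
    drawn without replacement from n generations is correct,
    given that c of the n are correct."""
    if n - c < k:  # too few failures to fill k draws: success guaranteed
        return 1.0
    prob_all_fail = 1.0
    for i in range(n - c + 1, n + 1):
        prob_all_fail *= 1.0 - k / i
    return 1.0 - prob_all_fail

# Illustrative companion metric: fraction of generations that compile.
def compilation_rate(compiled: list[bool]) -> float:
    return sum(compiled) / len(compiled)
```

Computing the product of `1 - k/i` terms avoids the overflow that naive binomial coefficients hit at large n, which matters when sampling hundreds of generations per problem.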

49 jobs

Hippocratic AI

Hippocratic AI develops safety-focused large language models purpose-built for healthcare applications, with its flagship product Polaris deployed across over 150 million clinical patient interactions with zero reported safety issues. The company has raised $404 million at a $3.5 billion valuation to address the global shortage of 15 million healthcare workers through AI-powered clinical automation. Infrastructure runs on NVIDIA compute deployed via AWS, focusing on low-risk, non-diagnostic tasks where latency and reliability constraints differ from acute care workflows.

Polaris implements a constellation architecture that coordinates multiple specialized agents rather than relying on a monolithic model - an approach that trades orchestration complexity for narrower failure modes in production. The system handles chronic care follow-ups, medication reminders, and patient engagement workflows where diagnostic responsibility remains with clinicians. The company has developed over 1,000 AI healthcare agents using retrieval-augmented generation to ground responses in clinical protocols, though specific latency profiles, throughput characteristics, and the operational overhead of managing agent deployments at scale remain publicly undisclosed.

The technical approach prioritizes safety constraints inherent to healthcare applications: avoiding diagnostic or prescriptive capabilities, maintaining audit trails for clinical conversations, and operating within well-defined task boundaries. For engineers evaluating production ML systems, the trade-offs center on the constellation architecture's ability to handle distribution shift across patient populations versus the operational complexity of maintaining multiple specialized models.

Led by CEO Munjal Shah, the company positions itself across the entire healthcare industry vertical, though deployment details beyond the AWS/NVIDIA stack, and the distinction between research benchmarks and production performance in actual clinical settings, warrant closer examination for those building similar safety-critical inference systems.
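The internals of Polaris's constellation are not public. A minimal sketch of the general pattern, routing each patient message to a narrow specialist agent and escalating anything resembling a diagnostic or prescriptive request to a human, might look like the following; every agent name and keyword list here is hypothetical, and a production system would use trained intent classifiers and guardrail models rather than string matching.

```python
# Hypothetical routing tables -- stand-ins for trained classifiers.
DIAGNOSTIC_CUES = ("diagnose", "what disease", "is this cancer", "prescribe")
AGENT_CUES = {
    "medication_reminder": ("medication", "dose", "refill"),
    "chronic_care_followup": ("blood pressure", "glucose", "symptom log"),
}

def route(message: str) -> str:
    """Return the specialist agent for a patient message,
    escalating diagnostic or prescriptive requests to a clinician."""
    m = message.lower()
    # Safety check runs first: diagnostic intent always escalates.
    if any(cue in m for cue in DIAGNOSTIC_CUES):
        return "escalate_to_clinician"
    for agent, cues in AGENT_CUES.items():
        if any(cue in m for cue in cues):
            return agent
    return "general_engagement"
```

The design point the blurb describes shows up in the structure: each specialist has a narrow failure surface, and the safety gate sits in front of all of them rather than inside any single model.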

47 jobs

Mirelo AI

Mirelo AI builds foundation models for generating synchronized audio for video content, targeting the latency and quality bottleneck in audio-for-video workflows. Founded in 2023 in Berlin, the company raised $41 million in seed funding co-led by Index Ventures and Andreessen Horowitz. Their models generate synchronized sound effects in seconds rather than the hours typically required for manual sound design, addressing production throughput constraints across gaming, film, social media, and broader visual content verticals.

The technical stack centers on PyTorch with transformer architectures, optimized for H100 and H200 GPUs using Nsight profiling and SLURM for cluster orchestration. The team sources from Google Brain, Amazon, Meta FAIR, Disney, ETH Zürich, and Max Planck Institutes, combining AI research depth with domain expertise from musicians and product specialists. Co-founder and CEO CJ Simon-Gabriel previously worked at AWS Labs, where the founding team originated.

The core technical challenge is tight audio-visual synchronization at generation time - a constraint that spans model architecture design, latency optimization, and evaluation methodology. Production systems must handle variable-length video inputs while maintaining temporal coherence across generated audio, requiring careful trade-offs between generation speed, output quality, and computational cost. The company positions its models as infrastructure for visual content pipelines, treating audio generation as a systems problem rather than a standalone creative tool.
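Mirelo's model details are not public, but the bookkeeping behind temporal coherence for variable-length clips, deriving how many audio samples a clip's duration requires and slicing generation into frame-aligned windows, can be sketched as below; the function names are illustrative, and real systems would apply the same alignment at the granularity of learned audio tokens rather than raw samples.

```python
def audio_samples_for_clip(num_frames: int, fps: float, sample_rate: int) -> int:
    """Exact audio length (in samples) matching a video clip's duration."""
    return round(num_frames / fps * sample_rate)

def frame_aligned_windows(num_frames: int, fps: float, sample_rate: int,
                          frames_per_window: int) -> list[tuple[int, int]]:
    """(start, end) sample offsets for each window of video frames,
    so generated audio chunks land exactly on frame boundaries."""
    windows = []
    for start_frame in range(0, num_frames, frames_per_window):
        end_frame = min(start_frame + frames_per_window, num_frames)
        start = round(start_frame / fps * sample_rate)
        end = round(end_frame / fps * sample_rate)
        windows.append((start, end))
    return windows
```

For a 10-second clip at 24 fps and 48 kHz, the target length is 480,000 samples; generating window by window with boundaries computed from frame indices (rather than accumulating per-chunk lengths) prevents rounding drift from compounding across a long video.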

8 jobs