Cognition
About
This company hasn't shared a description yet.
Similar companies
Thinking Machines Lab
Thinking Machines Lab is a 2025-founded AI research and product company led by Mira Murati, former CTO of OpenAI. The organization addresses a concentration problem: training methods for frontier AI systems have remained largely confined to top labs, constraining public understanding and limiting users' ability to customize systems to specific needs. The team - scientists and engineers who previously built ChatGPT and Character.ai and contributed to PyTorch - focuses on making AI systems more widely understood, customizable, and generally capable through open-science publications and code releases.

The company's technical work centers on multimodal systems designed to adapt across the full spectrum of human expertise, with an explicit architectural preference for human–AI collaboration over full autonomy. The stack includes Python, Rust, PyTorch, React/TypeScript, Kubernetes, and Spark. Development priorities span training and analysis of frontier models, multimodal system design, and foundational ML framework work - reflecting the team's prior experience building widely deployed products and infrastructure.

The operational model emphasizes open science: research findings and implementations are released publicly rather than kept proprietary. This approach targets both the customizability bottleneck - users cannot effectively tune systems to domain-specific requirements - and the knowledge-distribution problem that limits informed discourse about frontier model development. Product outputs include multimodal systems and published research artifacts, alongside the methodological contributions inherent in the open-release practice.
Reflection AI
Reflection AI develops open foundation models aimed at superintelligent autonomous systems, with current work focused on autonomous coding as a path to broader cognitive automation. The company combines reinforcement learning with large language models to build systems capable of handling most cognitive work done on a computer, treating autonomous code generation as the bottleneck that unlocks that capability.

The team includes contributors to AlphaGo, AlphaZero, PaLM, GPT-4, and Gemini, bringing production experience across game-playing RL systems and frontier language models. That background suggests familiarity with the trade-offs of training large-scale models - compute efficiency, sample complexity, and the operational challenges of running RL at scale alongside supervised pretraining.

Reflection's stated objective is keeping superintelligence open and accessible through open foundation models. In practice, this implies work on model architectures, training infrastructure, and deployment systems designed for broad distribution rather than proprietary hosting. The autonomous-coding focus suggests evaluation infrastructure for code generation, likely including metrics beyond pass@k: compilation rates, execution correctness, and performance characteristics of generated code under real-world constraints.
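As context for the pass@k metric mentioned above: it is commonly reported via the unbiased estimator popularized by the HumanEval benchmark, which gives the probability that at least one of k samples drawn from n generations (c of which pass) is correct. A minimal sketch, with illustrative sample counts:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k: P(at least one of k draws from n samples,
    c of which are correct, passes). Returns 1.0 when failure is
    impossible (fewer than k incorrect samples exist)."""
    if n - c < k:
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# Illustrative numbers: 200 generations per task, 30 pass
estimate = pass_at_k(n=200, c=30, k=10)
```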
Braintrust
Braintrust builds an AI observability platform for measuring, evaluating, and improving AI systems in production. The platform integrates LLM evaluation into standard engineering workflows, serving companies including Notion, Stripe, Zapier, Vercel, and Ramp.

The system enables teams to iterate on AI applications through real-time data pipelines that convert production data into evaluation feedback, with interfaces designed for both engineering iteration and product prototyping. The technical architecture centers on evaluation tooling that supports a daily feature-deployment cadence. The platform provides UI-based prototyping for non-engineers and real-time review workflows for cross-functional teams. Core infrastructure runs on Go, Python, and Node.js, with Postgres and Redis for data persistence and caching, deployed on AWS via Terraform and Docker.

The team operates as a small group focused on developer-tooling problems: building data pipelines for production AI systems, creating evaluation interfaces for LLM performance measurement, and developing workflows that reduce latency in feedback loops. Technical domains span AI development, model evaluation frameworks, real-time data infrastructure, and engineering workflow optimization.
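The feedback loop described above - capture production outputs, score them against references, aggregate into a metric - can be sketched generically. None of the names below are Braintrust's actual API; this is only the shape of such a pipeline:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class EvalCase:
    input: str
    output: str    # model output captured from production traffic
    expected: str  # reference or reviewer-approved answer

def exact_match(case: EvalCase) -> float:
    """Simplest possible scorer; real platforms also support
    LLM-as-judge and heuristic scorers."""
    return 1.0 if case.output.strip() == case.expected.strip() else 0.0

def run_eval(cases: list[EvalCase], scorer: Callable[[EvalCase], float]) -> dict:
    scores = [scorer(c) for c in cases]
    return {"mean": sum(scores) / len(scores), "n": len(scores)}

cases = [
    EvalCase("2+2?", "4", "4"),
    EvalCase("capital of France?", "paris", "Paris"),  # fails exact match
]
summary = run_eval(cases, exact_match)  # {'mean': 0.5, 'n': 2}
```

In a real deployment the `cases` list would be fed continuously from production logs rather than hard-coded, which is the "real-time pipeline" part of the description.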
Relevance AI
Relevance AI operates a no-code platform for building and orchestrating teams of AI agents to automate tasks at scale. Founded in 2020, the company addresses the operational bottleneck of deploying AI agents across organizations by abstracting away the complexity of agent creation and coordination. The platform saw 40,000 agents created in January 2025 alone - a 40x year-over-year increase in agent-creation velocity - and supports thousands of subject-matter experts across fast-growing scaleups and Fortune 500 companies, including Activision and SafetyCulture.

The architecture centers on agent-orchestration and workforce-management primitives that let non-technical users instantiate and coordinate agent teams without writing code. This presents a trade-off: accessibility and deployment speed against the control and customization available in code-first frameworks. The platform's value proposition hinges on reducing time-to-deployment for agent-based automation workflows, particularly for organizations constrained by engineering bandwidth or lacking deep ML expertise.

The company operates from Australia and serves customers across gaming, enterprise software, and workplace safety verticals. The 40x growth in agent creation suggests either expanding adoption within existing customers or rapid customer acquisition. Either way, maintaining reliability and cost predictability at that scale - LLM API costs, latency in multi-agent workflows, and failure-mode handling - remains a central engineering challenge for any orchestration platform in production.