Magic operates 8,000 NVIDIA H100s on Google Cloud, training frontier code models designed to automate software engineering and AI research itself. The company has raised $515 million from Nat Friedman, Daniel Gross, CapitalG, and Sequoia to pursue AGI directly through code generation, treating automated AI research as the primary bottleneck rather than incremental developer tooling. Its technical focus spans large-scale pre-training, domain-specific reinforcement learning, ultra-long context windows, and inference-time compute scaling.
The company's research program centers on fundamental problems in automating software engineering at scale rather than incremental productivity improvements. Context window extension and inference-time compute are treated as first-class constraints, not auxiliary features. Co-founded by Eric Steinberger and Sebastian De Ro, the team remains small and emphasizes ownership over execution: engineers and researchers work on meaningful subsets of the problem rather than predetermined roadmaps.
Infrastructure operates at production scale: the H100 cluster represents committed capital for frontier training runs, not research prototypes. The operational model assumes that code generation quality and AI research automation are the direct path to AGI, making software engineering the domain where model capabilities and safety research converge. Google Cloud provides the substrate, but the company holds its GPU allocation outright.