
[Generated via OpenAI’s latest model on July 1, 2025]
That heavily modified nod to Field of Dreams seems to encapsulate the modus operandi of more than a few AI firms lately. And while few people can agree on what AGI, let alone “agent,” means, it is clear that the money is flowing. Just as Elon Musk’s xAI finalizes a $10 billion fundraising round, split between $5 billion in debt and $5 billion in strategic equity, the AI industry finds itself at an inflection point. A year after a Goldman Sachs analyst questioned whether the AI spending spree would ever yield a return, the industry’s response has been to pour gasoline on the fire. The calculus is getting fuzzier, the price tags more astronomical, and the strategies more desperate.
Patents aren’t always a reliable innovation metric
While Meta and others wage their talent wars with nine-figure packages, patent filings tell a different story about who is actually filing AI patents, and having them granted. Trade-secret approaches remain common in the fast-moving world of frontier AI, and the most innovative companies don’t necessarily have the biggest patent portfolios. The table below covers international patent filings from 2020 to 2025 (year to date, and subject to a reporting lag).
Company | AI patents | Core AI % | Avg. citations |
---|---|---|---|
Microsoft | 5,455 | 40.8% | 16.4 |
 | 4,579 | 36.1% | 18.3 |
Meta | 1,967 | 23.6% | 26.9 |
OpenAI | 15 | 33.3% | 10.5 |
Scale AI | 14 | 14.3% | 11.4 |
Key findings
- The old guard leads: Traditional tech giants hold 10x more AI patents than headline-grabbing startups
- Quality vs. quantity: Apple averages 130 citations per patent vs. Meta’s 27, despite holding fewer total patents and facing allegations that it has fallen behind in the AI innovation race.
Methodology: Analysis of USPTO and international patent filings from 2020-2025 using AI-specific CPC codes (G06N for neural networks, G06F40 for NLP, G06V for computer vision). Patent families deduplicated to avoid counting the same invention across jurisdictions.
Source: Google Patents Public Dataset via BigQuery
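The family-deduplication step described in the methodology can be sketched in a few lines of Python. This is an illustrative reconstruction, not the analysis’s actual pipeline, and the records and family IDs below are hypothetical:

```python
# Sketch of the two methodology steps: filter to AI-specific CPC codes,
# then deduplicate patent families so an invention filed in several
# jurisdictions is counted once. Sample records are hypothetical.

AI_CPC_PREFIXES = ("G06N", "G06F40", "G06V")  # neural nets, NLP, vision

def count_ai_families(records):
    """Count unique patent families with at least one AI-related CPC code."""
    families = set()
    for rec in records:
        if any(code.startswith(AI_CPC_PREFIXES) for code in rec["cpc"]):
            families.add(rec["family_id"])  # set membership dedups families
    return len(families)

records = [
    {"family_id": "F1", "cpc": ["G06N3/08"]},   # US filing, neural network
    {"family_id": "F1", "cpc": ["G06N3/08"]},   # EP filing, same family
    {"family_id": "F2", "cpc": ["G06V10/82"]},  # computer vision
    {"family_id": "F3", "cpc": ["H04L9/40"]},   # network security: not AI
]

print(count_ai_families(records))  # 2
```

The two F1 filings collapse to one family and the non-AI filing is excluded, which is why jurisdiction-level filing counts can overstate a company’s AI output.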
Nowhere is this paradox more visible than at Meta. While xAI’s funding was oversubscribed by investors betting on a “build it and they will come” narrative, Mark Zuckerberg is engaged in a frantic, brute-force campaign to buy his way back to the top, creating a fabulously compensated, and likely unstable, AI dream team. The industry seems trapped in a cycle of reactive, inefficient spending, doubling down even as mounting evidence suggests they might be doing it all wrong.
The DeepSeek shock that changed little
When Chinese startup DeepSeek achieved near-parity with OpenAI’s o1 reasoning model in early 2025, reportedly on a shoestring budget, it at first looked like the industry’s “emperor has no clothes” moment. But DeepSeek likely spent more than it first appeared: while the company claimed $5.6 million in training costs, independent analysis puts total development costs at $1.3-1.6 billion once infrastructure is included. Even so, if DeepSeek did train a powerful model on the cheap, it hints that capital efficiency and novel architectures can challenge brute-force computation. NVIDIA shed $593 billion in market value in a single day, a record one-day loss.
Strangely, the R1 moment appears to be fueling more AI spending, not less. AI demand, it seems, is elastic: if DeepSeek really did make AI considerably less expensive to run, cheaper generative AI simply means generative AI showing up in more and more places over time.
Even before DeepSeek, Big Tech firms such as Meta, Amazon, Alphabet and Microsoft had planned to collectively spend hundreds of billions of dollars on AI technologies and datacenter buildouts in 2025: Amazon leads with $100 billion in overall capex (the majority going to AI and data centers), followed by Microsoft at $80 billion, Google at $75 billion and Meta at $60-65 billion.
By the middle of the year, Meta appears to have launched the most aggressive talent-acquisition campaign in history, offering researchers compensation packages reaching a cumulative $100 million across multiple years, primarily via restricted stock units. The figures are contested, though: Lucas Beyer, for one, has explicitly denied rumors that he would receive a $100 million sign-on bonus.
Meta’s highly compensated superintelligence labs
In June 2025, Mark Zuckerberg unveiled Meta Superintelligence Labs (MSL) with all the subtlety of a tech billionaire who had just watched his stock price come close to hitting an all-time high. Co-leading the new venture are Alexandr Wang, the 27-year-old former Scale AI CEO who built a $14 billion data-labeling empire, and Nat Friedman, GitHub’s former chief who once sold his startup to Microsoft for $7.5 billion. The pair will manage what might be the most expensive collection of AI talent ever assembled under one roof.
The roster reads like a who’s who of recent AI breakthroughs. From OpenAI alone, Meta extracted a bevy of researchers, including Lucas Beyer, co-creator of the Vision Transformer (87,000 citations and counting), and Trapit Bansal, who helped birth OpenAI’s o1 reasoning model. Meta gutted OpenAI’s Zurich office, taking multimodal experts Alexander Kolesnikov, Xiaohua Zhai and Jiahui Yu in what one insider called a “coordinated raid.”
From Google, Meta nabbed Jack Rae, who led thinking and inference-time scaling for Gemini and pioneered memory-compression techniques that run 3,000x more efficiently than standard models. Pei Sun, another DeepMind veteran, brings over 10,000 citations in 3D perception. Even Anthropic, supposedly mission-driven and retention-focused, lost Joel Pobar, a former Meta engineer who spent 11 years there before his brief Anthropic stint. Meta also recruited Shengjia Zhao, GPT-4’s reward-model lead, from OpenAI.
And then there is Johan Schalkwyk of Sesame AI, who was also a pioneer behind Google Voice Search. His current work on sub-200ms-latency speech models for wearables hints at Meta’s real play: building AI for smart glasses and future AR devices. He is paired with Ji Lin from MIT, whose model-compression work has been downloaded 19 million times.
Rounding out the team are former OpenAI multimodal specialists Huiwen Chang (who achieved 64x speedups in image generation), Jiahui Yu (an Imagen developer), and reasoning experts Hongyu Ren and Shuchao Bi, the latter of whom created YouTube Shorts’ recommendation algorithm before joining OpenAI’s post-training team.
The price tag for this assembled expertise? Conservative estimates put it north of $500 million in compensation alone over four years.
This unprecedented spending on human capital raises a critical question: does it translate into measurable innovation? An analysis of R&D expenditure versus patent filings suggests that pouring money into talent doesn’t guarantee a proportional return on innovation… at least when using patents as a proxy, which is admittedly an imperfect metric.
Rank | Company | R&D (USD) | AI Patent Families | AI Families per $1B | Cost per Family |
---|---|---|---|---|---|
1 | Microsoft | $30.2B | 5,455 | 180.5 | $5.5M |
2 | NVIDIA | $8.9B | 1,480 | 166.6 | $6.0M |
3 | Apple | $30.6B | 2,771 | 90.4 | $11.1M |
4 | Meta | $37.4B | 1,967 | 52.6 | $19.0M |
5 | Tesla | $4.1B | 58 | 14.3 | $70.1M |
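The table’s two derived columns follow directly from the first two: families per $1B is the family count divided by R&D in billions, and cost per family is the reciprocal. A quick sketch, using the Microsoft and Tesla rows above (small differences from the table’s printed ratios presumably reflect rounding of the underlying R&D figures):

```python
# Recompute the table's derived columns from its raw inputs.
def efficiency(rd_usd_billions, families):
    families_per_billion = families / rd_usd_billions
    cost_per_family_millions = rd_usd_billions * 1_000 / families
    return round(families_per_billion, 1), round(cost_per_family_millions, 1)

print(efficiency(30.2, 5455))  # Microsoft: (180.6, 5.5)
print(efficiency(4.1, 58))     # Tesla: (14.1, 70.7)
```

The spread is the point: roughly an order of magnitude separates Microsoft’s cost per AI patent family from Tesla’s.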
The talent war’s ‘mercenary’ problem
OpenAI CEO Sam Altman has famously argued that, in the long run, “missionaries will beat mercenaries.” Yet Meta’s 2025 hiring blitz suggests the mercenaries are winning for now. In the aforementioned June campaign, Meta poached nearly a dozen top researchers from OpenAI alone, including key minds behind GPT-4 and its reasoning-focused successors o1 and o3. The company also nabbed Google DeepMind’s Jack Rae, pre-training lead for Gemini, and Pei Sun, a core contributor to Gemini’s reasoning capabilities.
These aren’t rank-and-file engineers; they are jumping ship for compensation packages that eclipse even some of Meta’s own executive pay. While Meta disputes claims of nine-figure signing bonuses, it confirms that total compensation for senior AI leaders can reach $100 million over four years, with annual pay for top researchers exceeding $10 million.
The strategy is creating a palpable sense of siege. OpenAI’s Chief Research Officer Mark Chen described Meta’s recruiting tactics as feeling like “someone has broken into our home.” But despite Meta offering the highest pay, its prior AI-talent retention rate was just 64%, trailing Google DeepMind (78%) and mission-focused Anthropic (80%). The exodus also underscores the clearest truth of the AI boom: the prevailing mentality holds that throwing money at AI is still a good idea. In other words, if you spend it, they will come. But they might not stay if a better offer comes along.