Meta is trimming roughly 600 AI roles across legacy teams, just four months after CEO Mark Zuckerberg launched one of Silicon Valley’s most aggressive hiring sprees. Axios called it “a multibillion-dollar talent raid” in which the company offered eye-popping compensation packages to lure top researchers from OpenAI, Apple and Google. Wired reported that Zuckerberg offered top talent pay packages worth $100 million for a single year and, in some cases, $300 million over four years.
Chief AI Officer Alexandr Wang, who joined Meta in June as part of the company’s $14.3 billion investment in Scale AI, announced the cuts in an internal memo on October 22. “By reducing the size of our team, fewer conversations will be required to make a decision, and each person will be more load-bearing and have more scope and impact,” Wang wrote. Affected employees were told their termination date would be November 21 and were placed on a “non-working notice period” with internal access immediately revoked. They’ll receive 16 weeks of severance plus two additional weeks per year of service.
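The severance terms reduce to simple arithmetic. Here is a minimal sketch of that calculation, assuming a straightforward reading of the memo in which only full years of service count (the function name and that assumption are ours, not Meta’s):

```python
def severance_weeks(years_of_service: int) -> int:
    """Total severance weeks: 16 base weeks plus 2 per full year of service.

    Assumption (not stated in the memo): partial years are not counted.
    """
    BASE_WEEKS = 16
    WEEKS_PER_YEAR = 2
    return BASE_WEEKS + WEEKS_PER_YEAR * years_of_service

# An employee with 5 years of service would receive 26 weeks.
print(severance_weeks(5))
```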
Inside Meta, the AI unit was considered “bloated,” with FAIR and product-oriented teams competing for computing resources, according to people familiar with the matter who spoke to CNBC. When Wang’s team from Scale AI joined to create Superintelligence Labs, they inherited an oversized organization. Following the cuts, Meta’s Superintelligence Labs workforce sits at just under 3,000 employees. At least 318 of the layoffs hit Meta’s Menlo Park headquarters, according to a WARN notice filed with California.
The cuts hit Meta’s Fundamental AI Research (FAIR) group, the lab founded in 2013 by Yann LeCun, Meta’s Chief AI Scientist and Turing Award winner, along with product AI and infrastructure teams, according to Axios. The company’s newer TBD Lab, tasked with training next-generation foundation models, remains protected and is still hiring.
Some insiders have described FAIR as “dying a slow death,” though LeCun pushed back earlier this year, calling it “a new beginning” focused on long-term “advanced machine intelligence” research. The reality: FAIR, which LeCun founded in December 2013 and which established Meta as a serious AI player through breakthrough work in computer vision and deep learning, is being scaled back significantly just as Meta doubles down on product-focused AI teams.
The aggressive spending hasn’t translated to model dominance. In LMSYS Arena’s widely watched benchmarks, where models compete head-to-head in blind tests, Meta’s Llama 4 models rank in the 80s and 90s for general text performance, trailing Chinese open-source models like Alibaba’s Qwen3 and DeepSeek’s R1, as well as several OpenAI, Google Gemini and Anthropic Claude series models. The coding picture is even bleaker: in WebDev Arena benchmarks, Llama 4 Maverick ranks 40th and Llama 3.1-405B sits at 52nd, far behind startups and rivals Meta is spending billions to beat.
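The mechanics behind these leaderboards are worth a brief aside: each blind vote is a pairwise comparison, and votes are aggregated into ratings. The sketch below shows a classic Elo update for a single matchup, an illustration only; LMSYS actually fits a Bradley-Terry model over all votes, but the pairwise intuition is the same:

```python
def elo_update(r_a: float, r_b: float, score_a: float, k: float = 32.0):
    """One Elo update after a head-to-head comparison.

    score_a is 1.0 if model A wins, 0.0 if B wins, 0.5 for a tie.
    Illustrative sketch; real Arena rankings fit a statistical model
    over the full vote set rather than updating sequentially.
    """
    # Expected score of A given the current rating gap.
    expected_a = 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))
    # Winner gains, loser loses, proportional to the surprise.
    r_a_new = r_a + k * (score_a - expected_a)
    r_b_new = r_b + k * ((1.0 - score_a) - (1.0 - expected_a))
    return r_a_new, r_b_new

# Two equally rated models; A wins the blind vote.
print(elo_update(1000.0, 1000.0, 1.0))  # (1016.0, 984.0)
```

An upset (a low-rated model beating a high-rated one) moves ratings more than an expected result, which is why a single strong submission can jump a leaderboard quickly.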
Meta announced its Llama 4 series on Saturday, April 5, 2025. None of the models have carved out durable competitive spots in widely followed leaderboards. An “experimental” Llama-4-Maverick build briefly hit No. 2 on LMSYS’s Chatbot Arena (Elo 1,417). The Arena team and The Register later noted that this experimental submission produced unusually long, emoji-peppered answers and differed from the public release; Meta confirmed it was a customized chat variant. When users tried the public models, performance was mixed. Simon Willison documented in Ars Technica that a 20,000-token run with Scout via OpenRouter devolved into repetitive “junk” output. Meta’s VP of GenAI denied claims that test sets were used in training amid online speculation about benchmark contamination.
Meta and third-party listings touted up to a 10-million-token context for Scout, but practical limits reported by developers were far lower in early tests.
Meta’s old motto was “Move Fast and Break Things.” Now, it seems to be more: Move fast and refocus.