
Aerial view of OpenAI’s Stargate I data center under construction in Abilene, Texas, where the company is already running early AI training workloads. Credit: OpenAI
For years, rumors have swirled about OpenAI’s plans to build a massive data center project codenamed Stargate. Those plans were formalized at a January 2025 White House event in which OpenAI CEO Sam Altman, flanked by Oracle and SoftBank top brass, laid out how the companies intended to bring those dreams to fruition.
But in the months since, conflicting reports have emerged about everything from funding sources to construction timelines, leaving industry watchers to wonder whether Stargate represents the future of AI infrastructure or Silicon Valley’s latest fever dream. Today’s announcement of a 4.5 gigawatt partnership with Oracle, already a Stargate partner, suggests the reality lies somewhere in between: the momentum is real, but so are the challenges of building infrastructure with space for as many as 400,000 of Nvidia’s latest AI chips.
Early wins clash with infrastructure reality
The hazy promise of Stargate mirrors the broader AI infrastructure race. Even as construction of Stargate I in Abilene progresses and parts of the facility are now up and running, questions persist about the actual funding behind the $500 billion commitment. Elon Musk, for one, has publicly questioned whether SoftBank has the cash to back its promises. Meanwhile, Musk’s xAI has raised a combined $10 billion in debt and equity and is reportedly in talks to raise $20 billion in fresh funding, potentially valuing the AI and social media combo at over $120 billion. That round would rank as the second-largest in startup history, behind only OpenAI’s $40 billion raise. And while ChatGPT boasts 400 million weekly active users and 1.5 billion monthly active users, Grok has surged from 25.82 million monthly visits in February, following the Grok 3 release, to potentially more than 100 million users: still far behind OpenAI, but gaining ground rapidly, even as xAI has faced controversy over the at-times anarchic leanings of its AI models.
The careful language in today’s announcement hints at the challenges: OpenAI speaks of capacity “under development” rather than operational, and notes that it “now expect[s] to exceed” its initial commitment. That phrasing signals either genuine progress or goalpost-moving. Meanwhile, rivals are racing to match or exceed OpenAI’s ambitions: Meta plans to invest $60 billion to $65 billion in AI infrastructure during 2025, including a new 4-million-square-foot data center in Louisiana that Mark Zuckerberg says will eventually scale to 5 gigawatts and be “large enough to cover most of Manhattan.” The company has dubbed the forthcoming facility “Hyperion.”
Stargate is gradually finding its footing
OpenAI’s latest announcement provides a window into what’s actually happening on the ground versus what has merely been rumored or promised. The 4.5 gigawatt partnership with Oracle represents real progress: together with Stargate I in Abilene, OpenAI now has over 5 gigawatts of capacity under development, enough to run over 2 million chips.
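For a rough sense of scale, those two figures imply a power budget of about 2.5 kilowatts per chip. Here’s the back-of-envelope arithmetic (a minimal sketch; the per-chip framing is our illustration, not OpenAI’s):

```python
# Back-of-envelope check of the announced figures (illustrative only).
capacity_watts = 5e9   # "over 5 gigawatts" of capacity under development
chips = 2_000_000      # "enough to run over 2 million chips"

watts_per_chip = capacity_watts / chips
print(f"Implied power budget: {watts_per_chip:,.0f} W per chip")
# Output: Implied power budget: 2,500 W per chip
```

Roughly 2.5 kilowatts per chip, all-in, is at least plausible for GB200-class hardware once cooling, networking, and facility overhead are counted, which suggests the headline numbers are internally consistent.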
Oracle began delivering Nvidia GB200 racks last month, and OpenAI reports it’s already running “early training and inference workloads” at the Abilene site. “The Stargate I site has already created thousands of jobs, with more expected as operations expand, including specialized roles for electricians, equipment operators, and technicians hailing from more than 20 states,” OpenAI added. Looking further out, the company estimates the additional 4.5 gigawatts of capacity will create over 100,000 jobs across construction and operations, though such projections often prove optimistic. The announcement also confirmed that “Microsoft will continue to provide cloud services for OpenAI, including through Stargate.”
The multipartner approach signals OpenAI’s attempt to avoid past capacity constraints, but it also exposes the gap between Silicon Valley’s AI ambitions and America’s creaking infrastructure. Data centers already consume 4.4% of U.S. electricity, a figure that could triple by 2028 as AI drives demand to as much as 580 terawatt-hours annually. In regions like California’s Santa Clara County, where data centers already consume 60% of local power, or the Pacific Northwest, where new AI facilities will need electricity equivalent to 3–5 million homes, the grid is approaching its breaking point. Federal projections show blackout risks could increase 100-fold by 2030, from single-digit hours annually to over 800 hours per year, if new firm power generation doesn’t materialize. The fact that OpenAI is touting capacity “under development” rather than operational, and that even the partially running Abilene site required Tesla Megapacks to handle power demands, suggests these moonshots face a more fundamental constraint than chips or capital. In the end, the AI infrastructure race may be won not by whoever announces the biggest number, but by whoever can literally keep the lights on.
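For perspective on those grid figures, a quick calculation (our arithmetic, using only the projections cited above) translates them into more tangible terms:

```python
# Rough arithmetic behind the grid projections cited above (illustrative only).
HOURS_PER_YEAR = 24 * 365  # 8,760 hours

# 580 terawatt-hours of annual data center demand, expressed as average draw
annual_demand_twh = 580
avg_draw_gw = annual_demand_twh * 1e12 / HOURS_PER_YEAR / 1e9
print(f"Average continuous draw: {avg_draw_gw:.0f} GW")  # ~66 GW

# Blackout-risk projection: from single-digit hours to 800+ hours per year
print(f"800 outage hours is {800 / HOURS_PER_YEAR:.0%} of the year")  # ~9%
```

In other words, 580 terawatt-hours a year works out to an average continuous draw of roughly 66 gigawatts, and 800 hours of outages would mean the lights are off about 9 percent of the time.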