Here’s the scoop: AI is still on fire, despite bubble talk, and OpenAI’s rumored $40B capital raise—driving its valuation up to $340B—could be one of the biggest bets yet. But this isn’t happening in a vacuum. Competition from China’s DeepSeek, fresh AI model launches by Alibaba, and rumored next-gen offerings from Google and others mean the race is far from settled. Below are the top highlights you need to know—from who’s investing billions, to how “distilled” open-source models may undercut the most expensive AI projects. Buckle up.
1. OpenAI seeking $40B in fresh capital
Source: WSJ

[Adobe Stock]
Why it matters: This funding would likely cement OpenAI’s position in the global AI infrastructure race, helping it realize its $100B+ “Stargate” supercomputer ambitions while also inspiring fast followers. The valuation surge (from $157B to $340B in 15 months) signals investor confidence in centralized, capital-intensive AI development, even amid questions about whether the AI market is in a bubble. Yet Nvidia’s recent $600B selloff reflects continued market uncertainty.
2. DeepSeek launches Janus-Pro, an open-source multimodal AI
Chinese startup DeepSeek has been on a roll lately. It recently released Janus-Pro, an MIT-licensed multimodal AI that reportedly rivals both DALL-E 3 and Stable Diffusion in text-to-image synthesis. Building on a dual-encoder approach—SigLIP for understanding and VQ for generation—Janus-Pro claims stronger performance than prior open-source models in creative and interpretive tasks. Although its image outputs max out at 384×384 pixels, Janus-Pro’s open-source nature and free distribution terms promise to accelerate experimentation and reduce costs for developers exploring advanced image-generation capabilities.
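The decoupled design described above, one pathway for visual understanding and another for image generation, can be sketched in a few lines. This is a toy illustration only; the class and method names below are hypothetical stand-ins, not Janus-Pro’s actual API:

```python
# Toy sketch of a Janus-style decoupled multimodal model: one encoder
# path for visual *understanding*, a separate tokenizer path for image
# *generation*. All names here are illustrative placeholders.

class UnderstandingEncoder:
    """Stands in for the SigLIP-style semantic encoder."""
    def encode(self, image):
        return f"semantic-features({image})"

class GenerationTokenizer:
    """Stands in for the VQ tokenizer that produces discrete image codes."""
    def encode(self, image):
        return f"discrete-codes({image})"

class JanusStyleModel:
    def __init__(self):
        self.understand = UnderstandingEncoder()
        self.generate = GenerationTokenizer()

    def forward(self, image, task):
        # Route through the representation suited to the task, rather
        # than forcing one encoding to serve both roles.
        if task == "understanding":
            return self.understand.encode(image)
        return self.generate.encode(image)

model = JanusStyleModel()
```

The design choice this sketch captures is the interesting part: by not sharing a single visual representation between interpretation and synthesis, each pathway can be optimized for its own job.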
Why it matters: Janus-Pro’s launch undercuts proprietary AI providers by offering an alternative at minimal cost. Its performance validates DeepSeek’s rapid-fire open-source strategy.
3. Alibaba launches model it says bests DeepSeek V3
Source: Reuters
This week, a meme circulated on Reddit showing a man whispering into OpenAI CEO Sam Altman’s ear, with the caption: “Sir, the Chinese just launched a new AI model.” That new model was Qwen 2.5-Max, released during Lunar New Year and claiming superiority over DeepSeek’s V3 model, the base on which R1 was built. The model is no slouch. “Qwen 2.5-Max outperforms … almost across the board GPT-4o, DeepSeek-V3 and Llama-3.1-405B,” Alibaba’s cloud group wrote in a statement. The model was pre-trained on more than 20 trillion tokens and further refined through curated supervised fine-tuning (SFT). In contrast, DeepSeek-R1-Zero did not use supervised fine-tuning at all, relying entirely on reinforcement learning (RL).
Why it matters: Alibaba’s claim of outperforming GPT-4o and DeepSeek-V3 adds more ammunition to the argument that China can match Western AI development despite export controls on advanced GPUs.
4. U.S. tech leaders divided on the path forward for AI
Source: Various

[Adobe Stock]
Why it matters: The Silicon Valley tensions represent a fork in the road. Should the U.S. prioritize security or collaboration? Or both? Microsoft’s dual approach—investigating DeepSeek while hosting its models on Azure—could point to a continued balancing act between the two poles, while also acknowledging that open-source AI may only continue to accelerate.
5. OpenAI partners with U.S. National Labs
Source: OpenAI Blog

Brittany Humphrey holds a microneedle at Sandia National Laboratories. As part of a Cooperative Research and Development Agreement, Sandia has modified the shape of microneedles to speed up the extraction of interstitial fluid.
(Photo by Craig Fritz)
OpenAI inked an agreement to bring its latest “o-series” models to top U.S. National Laboratories (including Los Alamos, Sandia, and Livermore), supporting some 15,000 scientists in areas spanning materials science, renewable energy, nuclear security, and beyond. The o-series models for the labs will be hosted on NVIDIA’s Venado supercomputer at Los Alamos, providing researchers with a direct pipe to frontier models. The National Labs collectively receive over $16B annually in research funding.
Why it matters: This partnership shores up the labs’ AI toolkit for high-impact research—ranging from cybersecurity to potential next-gen energy breakthroughs—and signals a deeper federal stake in frontier AI. The partnership creates a testbed for AI-augmented science at scale, with spillover potential for federally funded academic consortia (e.g., NSF AI Institutes) and IP cross-pollination—OpenAI gains domain-specific feedback to harden models for STEM workflows, while labs gain early-mover advantage in advanced LLM systems tuned for science and math.
6. Google quietly announces its next flagship AI model
Source: TechCrunch
Google took a low-profile approach to unveiling Gemini 2.0 Pro Experimental, an upgrade to Gemini 1.5 Pro. TechCrunch later reported that after the model appeared in a changelog, the company removed the mention of it. Before it disappeared, the changelog noted: “Whether you’re tackling advanced coding challenges, like generating a specific program from scratch, or solving mathematical problems, like developing complex statistical models or quantum algorithms, 2.0 Pro Experimental will help you navigate even the most complex tasks with greater ease and accuracy.” While the model is gone for now, it will likely reappear before long.
Why it matters: The model’s improved coding and math capabilities could streamline complex technical workflows (e.g., quantum algorithm design, statistical modeling) for developers and researchers in enterprise and academic settings. The competition could also play a role in inspiring further STEM-focused improvements from rivals already forging ahead in the area, most notably OpenAI with its o3 series.
7. DeepSeek’s WASM 2× speed boost + distillation economics

A diagram of classic distillation rather than the AI sort. [Adobe Stock]
The result is impressive: the pull request delivers a near 2× performance increase on certain quantized llama.cpp models running under WebAssembly (WASM). On platforms where WASM’s portability is critical, such as in-browser inference for prototyping and real-time demos, this can be a major win. According to community benchmarks, the faster inference means smaller, quantized models can power dynamic tasks (e.g., summarization or classification) locally and more efficiently, without requiring specialized hardware.
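For readers unfamiliar with “quantized” models: the idea is to store each weight as a small integer plus a shared scale factor instead of a 32-bit float, shrinking memory and speeding up inference at a small accuracy cost. llama.cpp uses more elaborate block-wise formats (Q4, Q8, and friends); the following is only a minimal sketch of the core idea:

```python
def quantize_int8(weights):
    """Symmetric int8 quantization: map floats onto integers in [-127, 127]."""
    scale = max(abs(w) for w in weights) / 127.0
    return [round(w / scale) for w in weights], scale

def dequantize(q, scale):
    """Recover approximate float weights from the stored integers."""
    return [v * scale for v in q]

weights = [0.02, -0.5, 0.31, 0.9]
q, scale = quantize_int8(weights)   # 1 byte per weight instead of 4
restored = dequantize(q, scale)

# Rounding error per weight is bounded by half the scale step.
max_err = max(abs(a - b) for a, b in zip(weights, restored))
assert max_err <= scale / 2 + 1e-9
```

The 4× storage reduction (or more, with 4-bit formats) is what lets these models fit in a browser tab in the first place; the WASM speedup compounds that win.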
The development is also a microcosm of a growing trend: DeepSeek R1 could point to a future in which relatively frugal models offer significant performance. The Wall Street Journal explained as much in a piece titled “Why ‘Distillation’ Has Become the Scariest Word for AI Companies.” In essence, DeepSeek has tapped this technique to “borrow” capabilities from massive AI models, including those owned by major players like OpenAI, at a fraction of the cost. The WSJ notes that this approach is raising eyebrows among industry leaders who have poured billions into training the largest, most capable models, only to see smaller, cheaper rivals learn from their systems in a matter of days or weeks.
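As a rough illustration of the distillation idea the WSJ piece describes: a student model is trained to mimic a teacher’s full, temperature-softened output distribution rather than just hard labels, which is far cheaper than training from scratch. A minimal sketch with toy logits (no real models involved):

```python
import math

def softmax(logits, temperature=1.0):
    """Convert raw logits into a probability distribution."""
    exps = [math.exp(l / temperature) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """Cross-entropy of the student against the teacher's softened
    distribution. Minimizing this trains the student to match the
    teacher's *entire* output distribution, not just its top answer."""
    t = softmax(teacher_logits, temperature)
    s = softmax(student_logits, temperature)
    return -sum(ti * math.log(si) for ti, si in zip(t, s))

teacher = [4.0, 1.0, 0.2]       # toy logits from a large "teacher" model
student_close = [3.8, 1.1, 0.3] # student that already mimics the teacher
student_far = [0.1, 3.0, 2.0]   # student that disagrees with the teacher
```

A student whose outputs track the teacher’s incurs a lower loss, so gradient descent on this objective steadily pulls the small model toward the big one’s behavior, which is exactly why the technique worries companies that paid for the original training runs.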
Why it matters: The convergence of faster WASM-based inference and cheap distillation techniques could slash the runtime of quantized AI models in fields like molecular dynamics, structural biology, and climate simulation—domains that have traditionally required expensive high-performance clusters.

[Intel]
8. Intel shelves ‘Falcon Shores’ AI chip
Source: TechCrunch
Intel won’t bring its high-performance “Falcon Shores” GPU to market, turning it into an internal test chip instead. The company’s pivot highlights how tough the high-stakes AI hardware race has become—especially as Nvidia and AMD seize the spotlight. Falcon Shores was originally aimed at AI/HPC markets worth an estimated $40B+; Intel is refocusing on its “Jaguar Shores” concept instead.
Why it matters: For R&D teams, fewer new GPU suppliers could complicate upcoming HPC projects. Intel’s move may steer labs toward rival chips or drive them to investigate open-source solutions as AI model sizes and demand surge.