Research & Development World


This week in AI research: OpenAI eyes $340B valuation, partners with national labs

By Brian Buntz | January 30, 2025

Here’s the scoop: AI is still on fire, despite bubble talk, and OpenAI’s rumored $40B capital raise—driving its valuation up to $340B—could be one of the biggest bets yet. But this isn’t happening in a vacuum. Competition from China’s DeepSeek, fresh AI model launches by Alibaba, and rumored next-gen offerings from Google and others mean the race is far from settled. Below are the top highlights you need to know—from who’s investing billions to how “distilled” open-source models may undercut the most expensive AI projects. Buckle up.

1. OpenAI seeking $40B in fresh capital

Source: WSJ

OpenAI is negotiating a $40B SoftBank-led funding round at a $340B valuation (including the fresh capital) to counter seismic disruption from China’s DeepSeek-R1, an open-source model rivaling OpenAI’s o1 at roughly 95% lower API cost. R1’s debut triggered a roughly $600B single-day selloff in Nvidia shares and pressured partners Oracle (-14%) and Dell (-9%), whose data-center ambitions hinge on continued AI buildout, including OpenAI’s $100B+ “Stargate” infrastructure project (slated to receive some $18B of this round’s proceeds). The deal would more than double OpenAI’s valuation from the $157B set in October 2024, positioning it as the world’s #2 startup behind SpaceX. The $40B bid intensifies a high-stakes infrastructure arms race, with Musk’s xAI already scaling its Tennessee “Colossus” supercomputer from 100K toward 1M GPUs. The capital surge also locks in dependency on costly Western chips, despite DeepSeek’s frugal pricing of roughly $0.55 per million input tokens. Time will tell whether future AI winners will need SoftBank-scale war chests or frugal workarounds.

Why it matters: This funding would likely cement OpenAI’s position in the global AI infrastructure race, helping it realize its $100B+ “Stargate” supercomputer ambitions while also inspiring fast followers. The valuation surge from $157B to $340B in a matter of months signals investor confidence in centralized, capital-intensive AI development, even amid questions about whether the AI market is in a bubble. Yet Nvidia’s recent roughly $600B selloff reflects continued market uncertainty.

2. DeepSeek launches Janus-Pro, an open-source multimodal AI

Source: GitHub, R&D World

Chinese startup DeepSeek has been on a roll lately. It recently released Janus-Pro, an MIT-licensed multimodal AI that reportedly rivals both DALL-E 3 and Stable Diffusion in text-to-image synthesis. Building on a dual-encoder approach—SigLIP for understanding and VQ for generation—Janus-Pro claims stronger performance than prior open-source models in creative and interpretive tasks. Although its image outputs max out at 384×384 pixels, Janus-Pro’s open-source nature and free distribution terms promise to accelerate experimentation and reduce costs for developers exploring advanced image-generation capabilities.
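
To make the decoupled design concrete, here is a toy sketch of the general pattern the announcement describes: one vision encoder feeding continuous features to a language model for understanding, and a separate head predicting discrete image tokens for generation. Every module, dimension, and name below is an illustrative placeholder, not Janus-Pro’s actual architecture or code.

```python
# Conceptual sketch of a decoupled multimodal design: one vision encoder for
# understanding, a separate discrete image-token head for generation, both
# attached to a shared language model. All modules and sizes are toy stand-ins.
import torch
import torch.nn as nn

DIM, CODEBOOK = 64, 512

understanding_encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, DIM))
language_model = nn.Linear(DIM, DIM)        # stand-in for a transformer LM
generation_head = nn.Linear(DIM, CODEBOOK)  # predicts discrete VQ-style image tokens

image = torch.randn(1, 3, 32, 32)

# Understanding path: continuous image features are fed to the LM.
understanding_out = language_model(understanding_encoder(image))

# Generation path: the LM output is mapped to discrete image-token ids,
# which a VQ decoder (omitted here) would turn back into pixels.
prompt_features = torch.randn(1, DIM)       # stand-in for an encoded text prompt
image_token_ids = generation_head(language_model(prompt_features)).argmax(-1)

print(understanding_out.shape, image_token_ids.shape)
```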

Why it matters: Janus-Pro’s launch undercuts proprietary AI providers by offering an alternative at minimal cost. Its performance validates DeepSeek’s rapid-fire open-source strategy.

3. Alibaba launches model it says bests DeepSeek V3

Source: Reuters

This week, a meme circulated on Reddit showing a man whispering into OpenAI CEO Sam Altman’s ear, captioned: “Sir, the Chinese just launched a new AI model.” That new model was Qwen 2.5-Max, released during Lunar New Year and claiming superiority over DeepSeek’s V3 model, on which R1 was built. The model is no slouch. “Qwen 2.5-Max outperforms … almost across the board GPT-4o, DeepSeek-V3 and Llama-3.1-405B,” Alibaba’s cloud group wrote in a statement. The model was pre-trained on more than 20 trillion tokens and further refined through curated Supervised Fine-Tuning. In contrast, DeepSeek-R1-Zero did not use Supervised Fine-Tuning at all, relying entirely on reinforcement learning (RL).

Why it matters: Alibaba’s claim of outperforming GPT-4o and DeepSeek-V3 adds more ammunition to the argument that China can match Western AI development despite export controls on advanced GPUs.

4. U.S. tech leaders divided on the path forward for AI

Source: Various

Silicon Valley finds itself in something of a quandary as international competition in the AI field heats up. In a personal blog post, San Francisco-based Anthropic CEO Dario Amodei insisted on strict export controls to maintain an AI edge, stressing that such controls were “more existentially important” than they were mere weeks ago. Meta’s chief AI scientist Yann LeCun hailed the arrival of DeepSeek R1 as a sort of vindication for open-source development. Marc Andreessen called R1’s arrival “AI’s Sputnik moment,” while Microsoft took something of a dual approach, revealing that it was investigating DeepSeek’s alleged data misuse even as it added R1 to Azure.

Why it matters: The Silicon Valley tensions represent a fork in the road. Should the U.S. prioritize security or collaboration, or both? Microsoft’s dual approach, investigating DeepSeek while hosting its models on Azure, could point to a continued balancing act between the two poles, while also acknowledging that open-source AI may only continue to accelerate.

5. OpenAI partners with U.S. National Labs

Source: OpenAI Blog

Brittany Humphrey holds a microneedle at Sandia National Laboratories. As part of a Cooperative Research and Development Agreement, Sandia has modified the shape of microneedles to speed up the extraction of interstitial fluid. (Photo by Craig Fritz)

OpenAI inked an agreement to bring its latest “o-series” models to top U.S. National Laboratories (including Los Alamos, Sandia, and Livermore), supporting some 15,000 scientists in areas spanning materials science, renewable energy, nuclear security, and beyond. The models will be hosted on the NVIDIA-powered Venado supercomputer at Los Alamos, giving researchers a direct pipeline to frontier models. The National Labs collectively receive over $16B annually in research funding.

Why it matters: This partnership shores up the labs’ AI toolkit for high-impact research—ranging from cybersecurity to potential next-gen energy breakthroughs—and signals a deeper federal stake in frontier AI. The partnership creates a testbed for AI-augmented science at scale, with spillover potential for federally funded academic consortia (e.g., NSF AI Institutes) and IP cross-pollination—OpenAI gains domain-specific feedback to harden models for STEM workflows, while labs gain early-mover advantage in advanced LLM systems tuned for science and math.

6. Google quietly announces its next flagship AI model

Source: TechCrunch

Google took a low-profile approach to unveiling Gemini 2.0 Pro Experimental, an upgrade from Gemini 1.5 Pro. TechCrunch later reported that, after the model appeared in a changelog, the company removed the mention of it. Before it disappeared, the changelog noted: “Whether you’re tackling advanced coding challenges, like generating a specific program from scratch, or solving mathematical problems, like developing complex statistical models or quantum algorithms, 2.0 Pro Experimental will help you navigate even the most complex tasks with greater ease and accuracy.” While the mention is gone for now, the model will likely appear before long.

Why it matters: The model’s improved coding and math capabilities could streamline complex technical workflows (e.g., quantum algorithm design, statistical modeling) for developers and researchers in enterprise and academic settings. The competition could also play a role in inspiring further STEM-related improvements from rivals already forging ahead in the area, most notably OpenAI with its o3 series.

7. DeepSeek’s WASM 2× speed boost + distillation economics

Source: GitHub, Reddit, WSJ

A diagram of classic distillation rather than the AI sort. [Adobe Stock]

A recent pull request to the llama.cpp project has stirred excitement—and a bit of confusion—across social media. “DeepSeek R1 just got a 2x speed boost. The crazy part? The code for the boost was written by R1 itself. Self-improving AI is here,” read the title of a popular Reddit post. One reader balked at the subject line: “Misleading title. The PR in question is https://github.com/ggerganov/llama.cpp/pull/11453, and the performance improvement has nothing to do with DeepSeek except being written by DeepSeek.”

Still, the result is impressive. The pull request offers a near 2× performance increase on certain quantized llama.cpp models running under WebAssembly (WASM). On platforms where WASM’s portability is critical—such as in-browser inference for prototyping and real-time demos—this can be a major win. According to community benchmarks, the faster inference time means smaller, quantized models can power dynamic tasks (e.g., summarization or classification) locally and more efficiently, without requiring specialized hardware.
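
The WASM build targets the browser, but the underlying idea, running a small quantized GGUF model locally without specialized hardware, can be sketched with the project’s Python bindings (llama-cpp-python). The model filename below is a placeholder rather than a reference to any particular release, and the generation settings are arbitrary examples.

```python
# Minimal local-inference sketch using llama-cpp-python (pip install llama-cpp-python).
# The GGUF path is a placeholder; any small quantized model file would work.
from llama_cpp import Llama

llm = Llama(model_path="models/small-quantized-model.Q4_K_M.gguf", n_ctx=2048)

# A lightweight summarization-style prompt, the kind of dynamic task the article
# describes running locally on quantized models.
result = llm(
    "Summarize in one sentence: WebAssembly lets quantized language models "
    "run directly in the browser without specialized hardware.",
    max_tokens=64,
    temperature=0.2,
)
print(result["choices"][0]["text"].strip())
```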

The development is also something of a microcosm of a growing trend: DeepSeek R1 could point to a future in which relatively frugal models offer significant performance. The Wall Street Journal explained as much in a piece titled “Why ‘Distillation’ Has Become the Scariest Word for AI Companies.” In essence, it explains that DeepSeek has tapped this technique to “borrow” capabilities from massive AI models, including those owned by major players like OpenAI, at a fraction of the cost. The WSJ notes that this approach is raising eyebrows among industry leaders who have poured billions into training the largest, most capable models, only to see smaller, cheaper rivals learn from their systems in a matter of days or weeks.
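
At its core, distillation trains a smaller “student” model to imitate the output distribution of a larger “teacher.” Below is a minimal, generic sketch of that soft-target objective in PyTorch; it illustrates the standard technique, not DeepSeek’s or any other company’s actual training pipeline, and the temperature value is an arbitrary example.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      temperature: float = 2.0) -> torch.Tensor:
    """Soft-target knowledge-distillation loss: the student is trained to match
    the teacher's softened output distribution (illustrative sketch only)."""
    soft_targets = F.softmax(teacher_logits / temperature, dim=-1)
    student_log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    # KL divergence between teacher and student, scaled by T^2 as is conventional
    return F.kl_div(student_log_probs, soft_targets,
                    reduction="batchmean") * temperature ** 2

# Toy usage: random logits over a 10-token vocabulary for a batch of 4 examples
teacher = torch.randn(4, 10)
student = torch.randn(4, 10, requires_grad=True)
loss = distillation_loss(student, teacher)
loss.backward()  # gradients flow into the student only
print(float(loss))
```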

Why it matters: The convergence of faster WASM-based inference and cheap distillation techniques could slash the runtime of quantized AI models in fields like molecular dynamics, structural biology, and climate simulation—domains that have traditionally required expensive high-performance clusters.

8. Intel shelves ‘Falcon Shores’ AI chip

Source: TechCrunch

Intel won’t bring its high-performance “Falcon Shores” GPU to market, turning it into an internal test chip instead. The company’s pivot highlights how tough the high-stakes AI hardware race has become, especially as Nvidia and AMD seize the spotlight. Falcon Shores was originally aimed at AI/HPC markets worth an estimated $40B+; Intel is now refocusing on its successor “Jaguar Shores” concept.

Why it matters: For R&D teams, fewer new GPU suppliers could complicate upcoming HPC projects. Intel’s move may steer labs toward rival chips or drive them to investigate open-source solutions as AI model sizes and demand surge.
