Amazon recently doubled down on its Anthropic investment with an additional $4 billion, while NVIDIA reported another quarter of remarkable growth with sales hitting $35.1 billion. Meanwhile, a debate emerged at the Cerebral Valley AI Summit about whether AI development is hitting a scaling wall, with industry leaders like Anthropic’s Dario Amodei and Scale AI’s Alexandr Wang taking opposing positions. In biotech, the Human Cell Atlas Consortium reported major progress in mapping human cells, while Converge Bio secured $5.5 million to build what it calls the “everything store for biotech LLMs.”
1. Amazon doubles investment: Additional $4B for AI firm Anthropic
Source: Wall Street Journal
Amazon has announced a significant expansion of its investment in AI startup Anthropic, contributing an additional $4 billion to double its total investment to $8 billion. The tech giant maintains its minority ownership stake in the San Francisco-based AI safety and research company, which develops the Claude AI assistant to compete with ChatGPT.
“We’ve been impressed by Anthropic’s pace of innovation and commitment to responsible development of generative AI,” said Matt Garman, chief executive of Amazon Web Services, highlighting the strategic importance of the partnership.
The investment follows Anthropic’s commitment “to spend $4 billion on Amazon’s cloud platform over the next five years,” as reported by the Wall Street Journal. Founded in 2021 by siblings and former OpenAI employees Dario and Daniela Amodei, Anthropic differentiates itself by claiming “its technology is safer and more reliable than that of other AI companies.”
This move comes amid increasing competition in the AI sector, with Google previously agreeing to invest up to $2 billion in Anthropic and Elon Musk’s xAI recently raising $5 billion at a $50 billion valuation. According to research firm PitchBook, the generative AI sector attracted nearly $30 billion in investments last year.
While the Federal Trade Commission is investigating AI investments by major tech companies, including Amazon’s relationship with Anthropic, UK antitrust officials have already cleared Amazon’s previous investment in September, finding no competitive concerns.
Anthropic plans to use the new funding to advance its machine learning hardware development and enhance Claude, its AI assistant that competes with OpenAI’s ChatGPT.
2. NVIDIA still sees surging AI chip demand, stock mostly flat
Source: Wall Street Journal
NVIDIA has announced another quarter of remarkable growth, with sales reaching $35.1 billion, up 94% from a year prior, and profits more than doubling to $19.3 billion. The AI chip giant also projected around $37.5 billion in revenue for its current quarter, exceeding analyst expectations.
“Demand for the company’s current generation of chips and for Blackwell was ‘incredible’ as leading AI developers scale up their computing infrastructure,” CEO Jensen Huang told analysts. The company expects its next-generation Blackwell chips to ship in the current quarter, though they will likely remain in short supply into the next fiscal year.
Despite these strong results, NVIDIA’s shares fell about 2.5% in after-hours trading, as the results fell short of some investors’ high expectations following several quarters of dramatic growth. The company faces growing challenges, including tightened U.S. restrictions on shipments to China and increasing competition from rivals like Advanced Micro Devices and AI chip startups, the Journal noted. Chief Financial Officer Colette Kress noted that while revenue from China had increased from the preceding quarter, it remained “well below levels logged before the export restrictions went in place.”
3. LLaVA-o1: New open-source visual language model
Source: LinkedIn
In a LinkedIn post, Runa AI founder and former DeepMind research engineer Aleksa Gordić announced LLaVA-o1, a new open-source model fine-tuned from Llama-3.2-11B-Vision-Instruct. Using just 100,000 training samples, the model achieves an 8.9% improvement on multimodal reasoning benchmarks over its base model, outperforming closed-source alternatives like Gemini Pro 1.5 and GPT-4o-mini. The developers implemented a prompting strategy that structures GPT-4o outputs into four distinct stages: summary, caption, reasoning, and conclusion. “My take: neat idea generating a multi modal dataset like this!” Gordić wrote. “Having said that I doubt that fine-tuning on 100k samples can create VLMs that are better at multimodal tasks than closed-source models, and we’re likely just seeing benchmark hacking at play here.”
Because LLaVA-o1 uses GPT-4o outputs for training-data generation and Llama-3.2 as its base model, the project’s GitHub repo notes that users must comply with OpenAI’s Terms of Use for the dataset and with the specific licenses of the base Llama models.
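To illustrate the structured-output idea, here is a minimal Python sketch of a four-stage prompt and parser. The stage tags mirror the summary/caption/reasoning/conclusion components described above, but the exact tag names and template are assumptions for illustration, not LLaVA-o1’s actual training format.

```python
import re

# The four reasoning stages described above; tag names are assumed
# for illustration and may differ from LLaVA-o1's actual format.
STAGES = ["SUMMARY", "CAPTION", "REASONING", "CONCLUSION"]

PROMPT_TEMPLATE = (
    "Answer the question about the image in four tagged stages:\n"
    "<SUMMARY>outline your approach</SUMMARY>\n"
    "<CAPTION>describe the relevant image content</CAPTION>\n"
    "<REASONING>reason step by step</REASONING>\n"
    "<CONCLUSION>state the final answer</CONCLUSION>\n\n"
    "Question: {question}"
)

def parse_stages(response: str) -> dict:
    """Split a model response into its four tagged stages."""
    parsed = {}
    for stage in STAGES:
        match = re.search(rf"<{stage}>(.*?)</{stage}>", response, re.DOTALL)
        parsed[stage.lower()] = match.group(1).strip() if match else None
    return parsed
```

Forcing the model to emit each stage separately is what makes the intermediate reasoning inspectable, and it is also what lets a structured dataset be generated from GPT-4o outputs in the first place.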
4. Human Cell Atlas Consortium advances toward first draft
Source: Nature
The Human Cell Atlas (HCA) consortium has announced significant progress toward creating the first comprehensive map of all human cells, marking a major milestone in understanding human biology. Since its founding in 2016, the consortium has grown to include more than 3,600 members across 102 countries.
“The HCA data portal currently hosts data from approximately 62 million cells collected from around 9,100 donors,” reports Nature. The consortium is constructing 18 HCA Biological Network Atlases, with each network consolidating available data related to individual tissues or organs.
5. Thoughts on how AI/ML tools are transforming the preclinical drug discovery landscape
Source: Substack
Marina T Alamanou, Ph.D. highlights in a Substack post how AI/ML tools are transforming preclinical drug discovery, with several key companies leading the transformation:
BenchSci, with support from Google’s AI fund Gradient Ventures, is tackling the data reproducibility crisis with its ASCEND platform. “ASCEND is the industry standard for antibody selection and over 50,000 scientists in 16 of the top 20 pharmaceutical companies and more than 4,500 academic institutions use ASCEND,” reports Alamanou. She notes that the platform can save millions per year in hard costs alone related to antibodies, which account for up to half of reagent failures.
Bruker Cellular Analysis is another major player following strategic acquisitions, offering advanced single-cell technologies. The company’s Optofluidic technology “accelerates product development by analyzing the phenotype and genotype of single cells or clones and generating insights by rapidly screening 1000s of cells at once,” Alamanou wrote.
She also covers Flagship Pioneering-founded Cellarity and other players.
6. Converge Bio raises $5.5 million to build ‘everything store’ for biotech LLMs
Source: LinkedIn/TechCrunch
Converge Bio CEO Dov Gertz notes that the company has secured $5.5 million in funding to develop what Gertz calls “the everything store for GenAI in biotech.” The round was led by TLV Partners, positioning the company to scale its platform for making biology-focused large language models (LLMs) more practical and explainable for research teams.
“A model is just a model. It’s not enough,” Gertz explained to TechCrunch. “A pipeline has to be made so companies can actually use the model in their own R&D process. The market is very fragmented, but pharma and biotech want to consume this technology in a consolidated way, in one place.”
7. NVIDIA reveals growing BioNeMo traction
Source: LinkedIn
In a LinkedIn post, David Ruau, who leads strategic business alliances for drug discovery AI in EMEA at NVIDIA, announced growing adoption of the company’s BioNeMo framework, an open-source collection of programming tools, libraries, and models for drug discovery. Adopters span pharmaceutical companies, techbio researchers, and AI platforms, including Argonne National Laboratory, Genentech, Ginkgo Bioworks, and others.
8. OML 1.0 supports AI model ownership verification
Source: LinkedIn
Sentient Foundation has released OML 1.0, a library that enables proof of AI model ownership. “This OML 1.0 library enables fingerprinting AI models, making it possible to prove who owns an open AI model. Here is to AI loyalty,” the company wrote on LinkedIn. The code is on GitHub.
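As a rough illustration of query-based fingerprinting, the general technique such ownership proofs rely on, here is a minimal Python sketch. The key-response pairs, verification function, and threshold are illustrative assumptions, not OML 1.0’s actual API; see Sentient’s GitHub repo for the real interface.

```python
# Illustrative sketch of query-based model fingerprinting, NOT the
# actual OML 1.0 API. The idea: the owner fine-tunes secret
# (key -> response) pairs into the model, then later proves ownership
# by showing the model reproduces those responses.
SECRET_FINGERPRINTS = {
    "zq-key-01": "vx-resp-81a",
    "zq-key-02": "vx-resp-ryu",
    "zq-key-03": "vx-resp-m2c",
}

def verify_ownership(model_fn, fingerprints: dict, threshold: float = 0.9) -> bool:
    """Claim ownership if the model reproduces enough secret responses.

    model_fn: a callable mapping a prompt string to the model's output.
    """
    hits = sum(1 for key, resp in fingerprints.items() if model_fn(key) == resp)
    return hits / len(fingerprints) >= threshold

if __name__ == "__main__":
    # Stub "model" that has memorized the fingerprints.
    stub_model = SECRET_FINGERPRINTS.get
    print(verify_ownership(stub_model, SECRET_FINGERPRINTS))  # True
```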
9. Multi-agent AI systems smooth banking customer onboarding
Source: LinkedIn
Patrick Rotzetter, Program Lead at Amazon Web Services (AWS), noted on LinkedIn that customer onboarding is “one of the most complex and resource-intensive processes in banking today.” So he decided to do something about it, drafting a proposal for “a multi-agent architecture that transforms banking customer onboarding.”
The proposed system orchestrates five specialized AI agents working in concert (a minimal sketch follows the list):
- A Coordinator Agent managing the overall KYC process
- A Regulatory Agent focused on AML/KYC compliance
- A Document Agent handling document collection
- A Validation Agent for cross-referencing documentation
- A Review Agent ensuring final compliance
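The following Python outline shows one way such a coordinator might sequence the specialist agents. The agent names follow Rotzetter’s list, but the classes, stub logic, and escalation rule are illustrative assumptions, not his actual architecture.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    instructions: str

    def run(self, case: dict) -> dict:
        # A real agent would call an LLM with these instructions plus
        # the case context; stubbed here for illustration.
        return {"agent": self.name, "status": "ok", "case_id": case["id"]}

@dataclass
class Coordinator:
    """Coordinator Agent: sequences the specialists for each KYC case."""
    specialists: list = field(default_factory=lambda: [
        Agent("regulatory", "Check AML/KYC compliance requirements."),
        Agent("document", "Collect and classify required documents."),
        Agent("validation", "Cross-reference documents against records."),
        Agent("review", "Perform the final compliance review."),
    ])

    def onboard(self, case: dict) -> list:
        results = []
        for agent in self.specialists:
            result = agent.run(case)
            results.append(result)
            if result["status"] != "ok":  # escalate to a human officer
                break
        return results

print(Coordinator().onboard({"id": "case-001"}))
```

A sequential pipeline with an early-exit escalation path is the simplest reading of the design; keeping a human compliance officer as the fallback matches Rotzetter’s point that the system augments rather than replaces human judgment.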
“This isn’t just automation – it’s intelligent orchestration,” Rotzetter explained. “We’re not replacing human judgment; we’re empowering compliance officers with precision tools and speed.”
10. Scaling law debate grows in prominence
Source: Newcomer
Industry leaders at the Cerebral Valley AI Summit in San Francisco have raised concerns about hitting a “wall” in AI progress. Over the past few years, genAI developers have exploited progressively larger amounts of data, along with more sophisticated GPUs and compute architectures, to yield rapid gains in AI capabilities. A growing number of industry observers argue those performance gains may be becoming more incremental, but not everyone is convinced that progress is slowing.
The Cerebral Valley gathering of approximately 350 CEOs, engineers, and investors highlighted a growing recognition that Google and other major players are experiencing diminishing returns from training their next-generation AI models. This calls into question the long-held belief that future AI models will automatically be dramatically more capable than current ones.
Scale AI CEO Alexandr Wang acknowledged hitting certain boundaries: “It seems to be the case that we’ve hit a wall on pre-training,” he said at the summit. “So the large-cluster training on huge amounts of internet data, that seems to have genuinely hit a wall, but we haven’t hit a wall on progress in AI.”
Newcomer noted that Anthropic CEO Dario Amodei strongly disagreed with the notion of an overall slowdown. “I was among the first to document the scaling laws and the scaling of AI. Nothing I’ve seen in the field is out of character with what I’ve seen over the last 10 years, or leads me to expect that things will slow down,” he stated, adding that “I don’t think there’s any barrier…as a general matter, we’ll see better models every some number of months.”
Databricks CEO Ali Ghodsi suggested that cost considerations alone make the “bigger is better” approach to large language models increasingly impractical, regardless of technical limitations.
11. Study reveals computing gap between academic and industry AI research
Source: Nature
A new survey has highlighted a notable disparity between academic and industry researchers’ access to the computing power needed for AI research. The study found that 66% of academic researchers rated their satisfaction with computing resources as 3 or less out of 5, with some reporting days-long waits for GPU access. “While those industry giants might have thousands of GPUs, academics maybe only have a few,” said study co-author Apoorv Khandelwal of Brown University.
Only 10% of surveyed academics reported access to NVIDIA’s H100 GPUs, limiting their ability to pre-train large language models. “It’s so expensive that most academics don’t even consider doing science on pre-training,” Khandelwal noted. Co-author Ellie Pavlick emphasized the importance of maintaining competitive academic research, stating, “When you have industry research, there’s clear commercial pressure and this incentivizes sometimes exploiting sooner and exploring less.”
12. OpenAI funds research into AI morality prediction algorithms
Source: TechCrunch
OpenAI’s nonprofit arm has awarded a three-year, $1 million grant to Duke University researchers studying “making moral AI.” The project aims to train algorithms to predict human moral judgments in medical, legal, and business scenarios. “The goal of the OpenAI-funded work is to train algorithms to predict human moral judgements in scenarios involving conflicts among morally relevant features in medicine, law, and business,” according to the press release.
The research, led by practical ethics professor Walter Sinnott-Armstrong and Jana Borg, builds on their previous work developing “morally-aligned” algorithms for applications like kidney donation allocation. Yet training AI to understand nuanced moral concepts remains challenging: “AI doesn’t have an appreciation for ethical concepts, nor a grasp on the reasoning and emotion that play into moral decision-making,” TechCrunch noted.
Research assistance: Frédéric Célerse, Ph.D., Research Scientist in AI for Chemistry, Ecole Polytechnique Fédérale de Lausanne.
Tell Us What You Think!