
Asked whether AI will lessen the need for researchers, Google’s head of Research Yossi Matias gave a clear answer. “The only scenario where you would need fewer researchers is if we assume we’ve answered almost all the major questions. I don’t think anyone believes that,” he said at Google’s flagship research conference in Mountain View. “We are only just beginning to understand the opportunity AI gives us to empower researchers.”
“This technology will create opportunities not just for more researchers, but for every researcher to ask bigger questions, accelerate their research agenda and achieve better results,” Matias said. “With an AI co-scientist, they can now tackle questions we previously couldn’t.” AI co-scientist is a multi-agent AI system built with Gemini 2.0 as a virtual scientific collaborator, as the company explained earlier this year. For context, Google’s Gemini 2.5 Pro model still ranks at or near the top of several prominent leaderboards, and Gemini 3.0 could launch relatively soon.
“I see AI as an amplifier of human ingenuity. It empowers scientists, healthcare workers, teachers and business people in their work and daily lives.”—Matias
Designing experiments is the new bottleneck
With AI-enabled tools making inroads in everything from lab notebooks to literature review, the constraint is beginning to shift from running experiments to choosing which ones to run, said Annalisa Pawlosky, a senior staff research scientist on Google’s AI co-scientist team. An AI can generate thousands of research ideas. A lab cannot test them all.
Yossi Matias: AI won’t replace researchers
At Google’s Research to Reality event in Mountain View last week, the head of Google Research laid out his vision for AI’s role in scientific discovery.
Key points from Matias:
- On the research cycle: “All of our research projects are motivated by problems in the real world. We solve the research problem, publish it for validation and peer review, then apply it back to real world applications. This generates the next questions.”
- On healthcare’s potential: Matias envisions a future where AI-enabled disease screening and analysis become routine: “There’s no reason why anybody should be surprised by the disease that is hitting them. With AI and having experts use that, we can actually get closer to prevention.”
- On empowering teachers: “With AI, there’s opportunity for more teachers to be more effective, to work effectively with more students. There’s no lack of opportunity to actually have the next generation be educated in a better way.”
- On quantum’s promise: Following the announcement of Google’s Quantum Echoes algorithm, the first verifiable quantum advantage on hardware, which can probe molecular structures 13,000 times faster than classical methods: “Just imagine that now we’re going to have the capability to create new insights into the world that can then be fed and amplified with AI. That’s going to open up new insights, new novelty, new innovation.”
- On what’s ahead: “We’re so early on in our ability to understand science, understand healthcare, to understand the world… There’s so much more work to do.”
From remarks at Google’s Research to Reality 2025 event, Mountain View, Calif.
To study amyotrophic lateral sclerosis (ALS), for example, researchers take patient cells and convert them to motor neurons. “This process of taking those special cells from patients and making them into motor neurons takes months, about three months,” Pawlosky said. “Then you have to age them or stress them to age them, and that takes another eight to twelve weeks.” Generating ideas is easy. Deciding which ones deserve patient cells and bench time is the work.
“AI co-scientist can give you thousands of suggestions, but the real challenge is filtering them,” she said. “How do we produce something that is useful to the researcher, that doesn’t add more effort and expense, but is also right? That’s the trifecta we’re trying to solve. It’s complicated and it’s hard.”
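Conceptually, that triage is a prioritization problem: weigh each hypothesis’s potential value and plausibility against the bench time it would consume. The Python sketch below frames it that way under stated assumptions; the `Candidate` fields, scoring rule, and example data are illustrative inventions, not the co-scientist team’s actual method.

```python
from dataclasses import dataclass


@dataclass
class Candidate:
    """One AI-suggested hypothesis awaiting triage (fields are assumptions)."""
    hypothesis: str
    usefulness: float    # 0..1: scientific value if the hypothesis holds
    plausibility: float  # 0..1: estimated chance it is actually right
    cost_months: float   # bench time the experiment would consume


def priority(c: Candidate) -> float:
    """Expected value per month of bench time."""
    return (c.usefulness * c.plausibility) / max(c.cost_months, 1e-9)


# Hypothetical example data for illustration only.
candidates = [
    Candidate("mechanism X drives motor-neuron aging", 0.9, 0.2, 6.0),
    Candidate("stressor Y accelerates the phenotype", 0.6, 0.5, 2.0),
    Candidate("pathway Z is a confound", 0.4, 0.7, 1.0),
]

for c in sorted(candidates, key=priority, reverse=True):
    print(f"{priority(c):.3f}  {c.hypothesis}")
```

In practice each score would come from expert judgment or model estimates, but the ranking logic is the point: cheap, plausible, high-value experiments float to the top.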
Acknowledging AI’s jagged competence
Even as AI capabilities advance rapidly, fundamental challenges remain. Katherine Chou, a vice president at Google Research, pointed to the shortcomings of current AI models. “A significant challenge is that AI, when left to its own devices with unfiltered, real-world data, tends to converge on what we call ‘jagged competence,’” she said. “You may have experienced this yourself. Sometimes an AI will offer something profound, and other times it will be complete nonsense. For a field as critical as medicine, this inconsistency is unacceptable.”
Her remedy was to ground AI in the rigor of the scientific method: peer review, open data, independent verification and reproducible experiments. “We need AI systems that reliably succeed, fail gracefully, and continuously learn from scientific truths,” Chou said. “This will result in a system that is truth-seeking and based on evidence, transparency, and reproducibility.”
Proof of concept: Thailand’s diabetic retinopathy deployment
Chou pointed to Google’s diabetic retinopathy work, which dates back to 2016, as an example of the scientific method in action. Publication in JAMA led directly to collaboration with Thailand’s public health system. That experience “led us directly to Dr. Paisan Ruamviboonsuk, a top ophthalmologist from Rajavithi Hospital in Thailand,” Chou said. “He contacted us right after reading our paper.”
In Thailand, some regions face a severe shortage of eye care specialists. Earlier studies put the country at roughly 1 to 1.5 ophthalmologists per 100,000 people nationally, and distribution is uneven, with many provinces falling below 1 per 100,000. That contact led to a Google team landing in Bangkok in 2017 to meet with Ruamviboonsuk, representatives from the Thai Ministry of Public Health, and the Thai FDA. “He truly became the Maxwell to our Faraday, enabling us to turn our research into reality,” Chou said.
The team ran a major AI interventional study with approximately 8,000 patients. They also conducted studies on health economics and outcomes to gather critical evidence for regulators and insurers. The deployment grew to significant scale in Thailand’s national screening program. In India, Dr. Ramasamy Kim at Aravind Eye Care has used the same methodology for screenings, with Google’s AI model supporting more than 600,000 screenings globally to date.
Adoption in the United States remains limited, which Chou links to regulation, payment and workflow. “I still think we’re probably two to three years away from actually seeing huge impact,” she said.
Chou sees the Thailand deployment as proof of concept for a methodology that can scale. “What happened in Thailand is a good example of how the scientific method can create a powerful blueprint for scaled social innovation,” she said. “But it’s really just the beginning.” The same retinal imaging technology can examine multiple conditions simultaneously from a single scan, multiplying its diagnostic value.
AI co-scientist in action
Modern AI could be an accelerant. AI co-scientist has already been put through validation studies that show its potential. In work published in Cell in September, microbiologists José Penadés and Tiago Costa at Imperial College London tested whether AI co-scientist could solve a puzzle that had eluded them for years: how capsid-forming phage-inducible chromosomal islands (cf-PICIs), genetic elements that spread antibiotic resistance, could jump between unrelated bacterial species. They fed the system their unpublished data and a straightforward question. Within 48 hours, the AI’s top-ranked hypothesis matched the mechanism they’d spent a decade proving experimentally: cf-PICIs hijack diverse phage tails to expand their host range.
“I was really shocked,” Penadés told IEEE Spectrum. “At first I thought the AI had hacked into my computer.” The system hadn’t, but it had access to the team’s 2023 paper describing cf-PICIs, which contained clues to the mechanism. Still, the AI’s reasoning impressed the Imperial researchers enough that they’re now investigating one of its runner-up hypotheses.
In a second study published in Advanced Science in September, Stanford researcher Gary Peltz used AI co-scientist to identify drug repurposing candidates for liver fibrosis. The system suggested three compounds targeting epigenetic regulators. Peltz added two of his own picks and tested all five in human liver organoids. Two of the AI’s suggestions, including the FDA-approved cancer drug vorinostat, reduced fibrosis and promoted liver regeneration. Neither of Peltz’s candidates worked.
The system uses what Google calls “tournament evolution”: specialized AI agents generate hypotheses, debate their merits and rank them using an Elo-style scoring system over multiple rounds of simulated peer review.
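As a rough illustration of how such a tournament might work, here is a minimal Python sketch under stated assumptions: `judge` stands in for the LLM debate between two hypotheses (random here, so the sketch runs standalone), and the K-factor and starting rating are ordinary chess defaults rather than anything Google has published.

```python
import itertools
import random

K = 32  # standard chess K-factor; the real system's parameters are not public


def expected_score(r_a: float, r_b: float) -> float:
    """Probability that A beats B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))


def update(r_a: float, r_b: float, a_won: bool) -> tuple[float, float]:
    """Update both ratings after one pairwise 'debate'."""
    e_a = expected_score(r_a, r_b)
    s_a = 1.0 if a_won else 0.0
    return r_a + K * (s_a - e_a), r_b + K * ((1.0 - s_a) - (1.0 - e_a))


def judge(hyp_a: str, hyp_b: str) -> bool:
    """Placeholder for an LLM debate deciding which hypothesis is stronger;
    random here for illustration only."""
    return random.random() < 0.5


def rank(hypotheses: list[str], rounds: int = 3) -> list[tuple[str, float]]:
    """Round-robin tournament: every pair 'debates' once per round."""
    ratings = {h: 1000.0 for h in hypotheses}
    for _ in range(rounds):
        for a, b in itertools.combinations(hypotheses, 2):
            ratings[a], ratings[b] = update(ratings[a], ratings[b], judge(a, b))
    return sorted(ratings.items(), key=lambda kv: kv[1], reverse=True)


if __name__ == "__main__":
    for hyp, elo in rank(["hypothesis A", "hypothesis B", "hypothesis C"]):
        print(f"{elo:7.1f}  {hyp}")
```

Over multiple rounds, consistently winning hypotheses accumulate rating, so the final sort approximates the ranked shortlist the article describes.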
While the technology is moving quickly, recent studies also expose its current limits. The Imperial team noted that AI co-scientist performed best when prior literature provided strong hints. For genuinely novel questions without precedent in published work, the system struggled. And like all AI tools, it requires expert oversight. Peltz noted that extracting useful hypotheses demands “careful prompting, iterative feedback, and a willingness to engage in dialogue with the AI.”
Chou warned that the technology sector faces a fork in the road. “When considering the future of technology and society, we face a choice: one path leads to a flood of AI-generated mediocrity, while the other offers a radical expansion of human knowledge,” she said.