The partnership between OpenAI and Los Alamos goes beyond previous text-based assessments of AI in biological contexts. OpenAI highlighted two aspects that set this research apart, noting that the “research will involve real laboratory work” and will provide “a more accurate assessment of genAI’s capabilities and limitations in biological research.” OpenAI emphasized that while written exercises and responses related to compound synthesis and distribution provide some insight, they don’t fully represent the practical skills necessary for conducting hands-on biological laboratory work.
The study will make use of GPT-4o, OpenAI’s latest multimodal AI model, which can process visual and voice inputs alongside text. (The current version of the publicly available model, which is faster than its predecessor, has attracted some criticism for being “partly finished.” A more powerful version is said to be slated to launch later this year.)
GPT-4o has improved multimodal capabilities, a theme the research pact with Los Alamos will explore. OpenAI explained on its website, “For example, a user less familiar with all the components of a wet lab setup can simply show their setup to GPT-4o and prompt it with questions, and troubleshoot scenarios visually through the camera instead of needing to convey the situation as a written question.”
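For readers curious what such a visual prompt looks like in practice, the minimal sketch below shows how an image and a question can be sent together to a multimodal model through OpenAI’s Python SDK. The model name, image URL, and question here are illustrative placeholders only and are not drawn from the Los Alamos evaluation itself.

```python
from openai import OpenAI

# The client reads the OPENAI_API_KEY environment variable by default.
client = OpenAI()

# Hypothetical example: a photo of a benchtop setup plus a plain-language question.
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": "What piece of equipment is shown here, and is it assembled correctly?",
                },
                {
                    "type": "image_url",
                    # Placeholder URL standing in for a photo taken at the bench.
                    "image_url": {"url": "https://example.com/bench_setup.jpg"},
                },
            ],
        }
    ],
)

print(response.choices[0].message.content)
```

The same pattern extends to follow-up questions in the conversation, which is what makes visual troubleshooting possible without converting the situation into a written description first.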
GenAI could also lower the barrier to entry for biohacking
The potential risks of AI-enabled biological threats are worrying as large language models advance and become increasingly multimodal. Threat actors could use such systems to design novel pathogens or toxins with enhanced virulence or resistance to treatments. In 2022, MIT’s Kevin Esvelt, testifying before the U.S. Senate Homeland Security and Governmental Affairs Committee, highlighted the risk of “numerous pandemic-capable viruses” potentially enabling individual terrorists to “gain the ability to unleash more pandemics at once than would naturally occur in a century.”
Large language models could lower the barrier to entry for creating biological weapons by providing step-by-step instructions or troubleshooting assistance to non-experts. While current closed models from tech firms have significant guardrails, those protections aren’t infallible. And underground models without guardrails also exist, making it easier for bad actors to use them to generate convincing misinformation about biological threats, potentially causing panic or interfering with response efforts.
A 2023 Congressional report underscored the threat.
“For example, as part of a recent study, an AI model for drug development was retrained to design molecules for toxicity instead of designing against them. The study reported that in less than six hours the AI model generated 40,000 molecules that scored within their desired threshold of toxicity and bioactivity, which included the nerve agent VX, other known chemical warfare agents, and new molecules that were predicted to have even higher toxicities.”
Despite such potential risks, most prior studies have focused on text-based interactions rather than exploring AI’s risks in real-world laboratory settings.
Approach and methodology of the Los Alamos/OpenAI collaboration
OpenAI’s Preparedness Framework will guide the biothreat evaluation, providing a foundation to assess and mitigate potential risks associated with advanced AI systems. The researchers will use proxy tasks and materials to assess AI’s influence on protocol execution and troubleshooting. They expect this approach to allow a realistic assessment without risking the creation or handling of genuinely dangerous substances.
Perspectives on the collaboration
Erick LeBrun, a research scientist at Los Alamos, emphasized in a press release the dual nature of AI’s potential in the field of biosecurity: “The potential upside to growing AI capabilities is endless. However, measuring and understanding any potential dangers or misuse of advanced AI related to biological threats remain largely unexplored.”
Mira Murati, OpenAI’s Chief Technology Officer, highlighted the alignment of this partnership with OpenAI’s broader mission: “As a private company dedicated to serving the public interest, we’re thrilled to announce a first-of-its-kind partnership with Los Alamos National Laboratory to study bioscience capabilities.”