In the early 1990s, the internet seemed poised to improve our lives by democratizing knowledge, publishing, and communication. While it did achieve many of these goals, it also introduced security risks ranging from malware to phishing. The online world of 2024 feels more like a war zone than a digital playground. “If you connect a computer to the internet, it will be attacked within the next few minutes. That’s just how things are. In AI, we’re not there yet, thankfully, and that’s why we’re doing what we are doing to protect AI systems,” explains Edmon Begoli, director of the Center for Artificial Intelligence Security Research at Oak Ridge National Laboratory (ORNL).
A new frontier for AI
AI in the 2020s evokes the same mixture of promise and menace. It enables everything from more accurate cancer diagnoses, transforming healthcare, to fully autonomous cars, yet it can also power weapon systems or be misused to design toxic substances.
In response, ORNL’s AI Initiative aims to magnify AI’s benefits while minimizing its hazards, drawing on the lab’s extensive experience in high-performance computing (HPC) and national security. Researchers at ORNL already use AI to control multimillion-dollar scientific instruments, search for new materials strong enough to withstand extreme conditions, and develop next-generation drugs.
ORNL and AI have a long history

The lab’s AI roots stretch back to 1979 and the Oak Ridge Applied Artificial Intelligence Project. Modern AI took off at ORNL with the rise of graphics processing units (GPUs), hardware created to speed up computer graphics but later harnessed for scientific computing and machine learning. In 2012, ORNL introduced the Titan supercomputer, then among the world’s fastest. It was succeeded by two top-ranked, GPU-powered systems, Summit and Frontier. Frontier, the first exascale system, can complete more than a quintillion (10^18) calculations per second, an almost unimaginable speed.
Making AI secure, trustworthy, and efficient
As the nation’s largest multidisciplinary national lab, ORNL integrates these HPC resources with its scientific and security programs. This synergy, notes Prasanna Balaprakash, ORNL’s director of AI programs, lets researchers tackle “tsunamis of data” in complex research environments, from monitoring experimental facilities to analyzing national security threats. The AI Initiative focuses on three areas:
- Security: AI can be weaponized in much the same way criminals exploit the internet. Begoli’s team at ORNL’s Center for Artificial Intelligence Security Research studies threats such as data poisoning, evasion attacks, and deepfake misuse. Their goal is to safeguard AI systems before such attacks become commonplace (a brief sketch of what an evasion attack looks like follows this list).
- Trustworthiness: Researchers work to ensure that AI’s decisions remain transparent, reliable, and free from bias. This is particularly important in fields like healthcare, where mistakes can be life-threatening.
- Energy Efficiency: As AI models grow, so does their power consumption. ORNL scientists develop methods to optimize algorithms and hardware, reducing environmental impacts.
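To make the evasion threat concrete, below is a minimal sketch of the fast gradient sign method (FGSM), a textbook evasion technique: it nudges an input just enough to raise a model’s error while leaving the input looking unchanged to a human. The PyTorch model and data here are illustrative placeholders, not ORNL systems or code.

```python
import torch
import torch.nn as nn

def fgsm_attack(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                epsilon: float = 0.03) -> torch.Tensor:
    """Return an adversarial copy of x: visually near-identical, but
    nudged in the direction that most increases the model's loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step each pixel by +/- epsilon along the sign of the loss gradient.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()  # keep pixels in the valid range

# Usage sketch with a toy stand-in classifier on random "images".
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
x = torch.rand(4, 3, 32, 32)            # batch of 4 RGB 32x32 images
y = torch.randint(0, 10, (4,))          # arbitrary ground-truth labels
x_adv = fgsm_attack(model, x, y)
print((x_adv - x).abs().max().item())   # perturbation stays within epsilon
```

Bounding each pixel change by epsilon is what makes evasion dangerous: the perturbed input looks identical to people yet can flip a model’s prediction, which is the class of failure Begoli’s center works to defend against.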
Looking ahead
ORNL’s dual focus on scientific innovation and national security drives its work in AI. At the intersection of HPC and real-world applications — including drug discovery, autonomous systems, and materials science — the lab is preparing for the next wave of AI challenges.
“As we learned from the evolution of the internet,” Begoli says, “technology that can empower us can also harm us if we ignore its vulnerabilities. That’s why ORNL is pushing to build AI that we can trust.”