
Keanu Reeves as Neo in The Matrix Reloaded (2003) dodges digital threats [Image licensed from ALAMY]
At ThreatLocker’s Zero Trust World event, CEO Danny Jenkins showcased how convincing AI-generated media has become by sharing a video of himself speaking fluent Italian and Spanish on his smartphone. “I don’t know a word of Italian,” he said as a strikingly realistic video played in which he appeared to have native-level proficiency.
Yes, it’s early days, but the technology is improving quickly. And it’s only a matter of time before synthetic fluency targets organizations with research-driven IP—think Silicon Valley giants, national labs, universities, and pharmaceutical companies—where cyber risks are considerable. R&D-heavy companies like Google (Operation Aurora, 2010) and Merck (NotPetya, 2017) have faced cyberattacks from nation-states, including Chinese and Russian APTs. Ransomware gangs also target research-focused hospitals, as seen in UCSF’s $1.14 million ransom payment in 2020. Universities and government labs, such as Oak Ridge National Lab and the COVID-19 research centers targeted by Russia’s APT29 in 2020, have been frequent victims of IP theft.
Casting a wide net

Danny Jenkins
Advanced AI attacks are possible today, but “the bottom line is, it’s expensive,” Jenkins said. “Cybercriminals mostly cast a wide net, like fishing with a hundred lines out, waiting for a bite. They’ll send 100,000 emails, wait for someone to bite, and get a reverse shell. Then they dig in.”
For the average business, that means such actors aren’t targeting you specifically. Those aiming for bigger payouts, however, take a more calculated approach. “If a cybercriminal wants to make a million dollars, they’ll go after a bigger target,” Jenkins said. “That takes a lot more research.”
Valuable targets can attract well-heeled adversaries
Consider this case in point: In mid-2020, Tesla narrowly escaped a ransomware attack that could have disrupted operations at its Gigafactory in Sparks, Nevada—a sprawling facility central to its battery production and R&D efforts. The mastermind, 27-year-old Egor Igorevich Kriuchkov, arrived in the U.S. on a tourist visa in July, armed with a plan hatched by a cybercrime syndicate. Kriuchkov wasn’t starting from scratch—he targeted a Russian-speaking Tesla employee he’d first met in 2016, reconnecting via WhatsApp to exploit that prior bond. The gang had studied Tesla, zeroing in on its Nevada battery R&D hub with a scheme to steal data and extort millions, reportedly investing $250,000 in custom ransomware for the operation. Over weeks, Kriuchkov groomed the employee with drinks, dinners, and trips, including one to Lake Tahoe, before offering $1 million to plant the malware via a USB drive. But the employee alerted the FBI instead—foiling the plot.
USB attacks have long been notorious—think of the circa-2010 Stuxnet worm, which used a USB drive to deliver malware that sabotaged Iran’s nuclear program, or the 2012 Shamoon attack on Saudi Aramco, where a USB-delivered virus erased data from over 30,000 machines. Many organizations have since locked down USB drives, assuming that’s game over. But Honeywell’s 2024 USB Threat Report says otherwise: 51% of industrial-focused malware attacks now target USB devices, a nearly six-fold leap from 9% in 2019. “That’s where people get confused,” Jenkins said. “They lock down USB drives, thinking it stops Rubber Duckies or O.MG cables,” referring to the whimsically named penetration-testing tools from Hak5. The Rubber Ducky poses as a thumb drive but rapid-fires commands as a fake keyboard; the O.MG cable is a tricked-out USB or Lightning cable rigged for keystroke-injection attacks. “Devices like those present themselves as keyboards to the system,” Jenkins said. “If you don’t block keyboards, they’re allowed through.”
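Jenkins’ point can be seen at the USB-descriptor level: a keystroke-injection tool enumerates with a HID keyboard interface (the USB spec’s class 0x03, boot protocol 0x01) rather than a mass-storage interface (class 0x08), so a policy that only blocks storage waves it through. The class codes below come from the USB specification; the sample devices and the `naive_policy` function are hypothetical illustrations, not any vendor’s actual enforcement logic.

```python
# Sketch: why "block USB drives" doesn't stop keystroke-injection tools.
# Interface class codes are from the USB-IF spec; the devices and the
# policy function are made up for illustration.

HID_CLASS = 0x03           # Human Interface Device (keyboards, mice)
MASS_STORAGE_CLASS = 0x08  # thumb drives, external disks
BOOT_KEYBOARD_PROTO = 0x01 # HID boot protocol: keyboard

def naive_policy(interfaces):
    """Allow a device unless it exposes a mass-storage interface."""
    return all(cls != MASS_STORAGE_CLASS for cls, proto in interfaces)

# Each device is modeled as the (interface_class, protocol) pairs it presents.
ordinary_thumb_drive = [(MASS_STORAGE_CLASS, 0x00)]
real_keyboard = [(HID_CLASS, BOOT_KEYBOARD_PROTO)]
ducky_style_tool = [(HID_CLASS, BOOT_KEYBOARD_PROTO)]  # claims to be a keyboard

print(naive_policy(ordinary_thumb_drive))  # False: storage is blocked
print(naive_policy(real_keyboard))         # True: keyboards are allowed
print(naive_policy(ducky_style_tool))      # True: the attack tool sails through
```

The last line is the blind spot Jenkins describes: to the policy, the injection tool is indistinguishable from a legitimate keyboard.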
Secrets often don’t stay secret for long anymore
Jenkins underscored the need to secure endpoints, stating, “The bigger concern is nation-states stealing IP or embedding malicious code into software. That’s terrifying.” He explained that while encryption and multifactor authentication are important, they’re insufficient if the endpoint is compromised, as “everything runs from the endpoint… If malware runs on a compromised endpoint, it’s over.”
Jenkins remarked on the rapid exposure of sensitive information, saying, “Secrets don’t stay secret for long anymore.” He pointed to tools like DeepSeek and open AI models as key drivers, noting that “it’s not always about stealing; a lot of it’s just out there now.”
“We’re seeing more AI-enabled attempts, and you can’t always tell the intent. AI agents, AI malware, and programs are about to explode,” Jenkins warned. Already, even novice attackers can harness AI to craft malware on the fly, a capability he suggested could shift from a trickle to a flood. These AI-driven attacks will “be extremely effective for the first three or four years,” Jenkins predicted. “Then people will move toward zero trust. They’ll stop trusting emails, even videos.”

Rob Allen
AI may be reshaping the cybersecurity playbook, but that doesn’t mean AI tools alone are enough to protect your network, warns ThreatLocker chief product officer Rob Allen. Traditional antivirus software relies too heavily on detecting “known” threats—leaving a blind spot for fresh, AI-generated code.
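The contrast Allen draws, detecting known-bad code versus denying everything not explicitly approved, can be sketched in a few lines. This is a toy model under stated assumptions: the hashes stand in for binaries, and real allowlisting products enforce policy at the driver level, not in a Python function.

```python
# Toy contrast: signature-based detection vs. default-deny allowlisting.
# All "binaries" here are stand-in byte strings; hashes are illustrative.
import hashlib

def fingerprint(binary: bytes) -> str:
    return hashlib.sha256(binary).hexdigest()

known_good = fingerprint(b"approved-business-app-v1.2")
known_bad = fingerprint(b"commodity-malware-sample")
novel_ai_payload = fingerprint(b"freshly-generated-payload")  # never seen before

signature_db = {known_bad}  # denylist: what the AV vendor has already seen
allowlist = {known_good}    # default-deny: what IT has explicitly approved

def antivirus_allows(h: str) -> bool:
    """Block only hashes matching a known signature."""
    return h not in signature_db

def default_deny_allows(h: str) -> bool:
    """Block everything that isn't on the approved list."""
    return h in allowlist

print(antivirus_allows(novel_ai_payload))     # True: unseen malware runs
print(default_deny_allows(novel_ai_payload))  # False: denied by default
```

Freshly generated code has, by definition, no signature, which is why the detection-based check passes it while the default-deny check does not.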