
[Image from OpenAI’s image generator]
Rao, now VP of AI at Databricks, highlights rapid AI progress at NTT Research’s Upgrade 2025 event in San Francisco. “We observed… a year-over-year cost reduction of 4x [since 2021],” Rao explained. “That’s actually a steep exponential… For those who know Moore’s Law, it’s roughly a 40% annual improvement. This was 400%—ten times faster.”
“So that’s very profound, when you think about it, right?” he continued. This rapid pace, he elaborated, fundamentally changes the economics of AI development. “When something is changing that quickly, projects that might seem prohibitively expensive today—maybe costing $100 million—could become feasible much faster than anticipated. That same project could drop to $25 million within a year, and maybe $6 million in two years.” That dramatic cost reduction, he stressed, is “huge” for enabling widespread adoption and innovation.
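The arithmetic behind that claim is straightforward; the sketch below simply compounds a 4x year-over-year cost reduction (figures are illustrative, not Databricks data):

```python
# Compounding the ~4x annual cost reduction Rao describes:
# a $100M project shrinks to ~$25M after one year and ~$6M after two.
cost = 100_000_000        # today's project cost in USD (illustrative)
annual_reduction = 4      # ~4x cheaper per year, per Rao's figure
for year in (1, 2):
    cost /= annual_reduction
    print(f"Year {year}: ~${cost / 1e6:.0f}M")
# Year 1: ~$25M
# Year 2: ~$6M
```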
On AI hype vs. reality

Naveen Rao, Ph.D.
Despite the efficiency gains, Rao takes a pragmatic view of AI’s current enterprise impact. “What we’re observing today is not the breakthrough automation… discussed a year or two ago,” he said.
He likens the reality to autonomous driving, which proved much harder than early projections. “It turned out to be a very hard problem… and still hasn’t been solved,” Rao said. “But that doesn’t mean there’s no value…”
Coding assistants are advancing faster than most AI applications. “I think we’ve gotten the user interface right… and we can provide insights to programmers,” Rao said. He sees similar “co-pilots,” in the general sense, emerging in legal, HR, and other fields. “It’s not going to eliminate jobs—it’s going to make each person more efficient,” he stated. Users will be able to complete tasks faster, with more streamlined access to information than before.
In software engineering, published productivity gains are significant. In 2024, Google CEO Sundar Pichai said that more than a quarter of the company’s new code was AI-generated. GitHub research in 2023 found that its Copilot tool helped developers complete tasks 55% faster. And even if the average productivity boost per coder is only on the order of 8%, that level is “huge when applied across 1,000 people.”
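To see why a single-digit gain still matters at scale, here is a quick back-of-the-envelope calculation (the 8% figure is the illustrative value above, not a measured result):

```python
# Rough arithmetic behind "huge when applied across 1,000 people":
# a modest per-developer gain adds up to substantial capacity org-wide.
developers = 1_000
avg_productivity_gain = 0.08   # assumed ~8% boost per coder
added_capacity = developers * avg_productivity_gain
print(f"~{added_capacity:.0f} engineer-equivalents of added capacity")
# ~80 engineer-equivalents of added capacity
```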
But just throwing AI at a problem won’t necessarily help. “You need people who understand the current workflows,” Rao said. “If you’re a lawyer using LexisNexis, you want people designing who understand that system… it’s almost like you need a specialized product manager.”
On UI and workflow: Meeting users where they are
Rao argues AI’s immediate value often lies in integrating it into existing workflows—”meeting users where they are.” Standalone AI tools create cumbersome processes, like copying huge SQL query results to a separate chat interface. Rao called such workflows “horrible.”
The solution is embedding AI into existing tools. Databricks, for instance, built an “AI Query” extension for SQL, letting users run AI functions like sentiment analysis directly within their SQL environment. “That is one of the fastest growing things we have,” Rao noted. “You’re meeting the user where they are… writing SQL.” Similarly, tools like Databricks AI/BI Genie translate natural language to SQL, aiding non-coders.
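As a rough illustration of that pattern (a minimal sketch, not Databricks’ documentation): the snippet below uses the databricks-sql-connector Python package to run one of Databricks’ SQL AI functions, ai_analyze_sentiment, inline in a query. The hostname, HTTP path, token, and product_reviews table are placeholders.

```python
# Minimal sketch: sentiment analysis inline in SQL, so results stay in the
# data environment instead of being copy-pasted into a separate chat UI.
# Connection details and the product_reviews table are placeholders.
from databricks import sql  # pip install databricks-sql-connector

with sql.connect(
    server_hostname="your-workspace.cloud.databricks.com",  # placeholder
    http_path="/sql/1.0/warehouses/<warehouse-id>",          # placeholder
    access_token="<personal-access-token>",                  # placeholder
) as connection:
    with connection.cursor() as cursor:
        cursor.execute(
            """
            SELECT review_id,
                   ai_analyze_sentiment(review_text) AS sentiment
            FROM   product_reviews
            LIMIT  10
            """
        )
        for review_id, sentiment in cursor.fetchall():
            print(review_id, sentiment)
```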
On biological vs. silicon inefficiency
Rao’s background in neuroscience provides a unique—and critical—lens through which he views current AI hardware. This perspective leads him to a stark conclusion about the workhorse of modern AI: the GPU.
“As a neuroscientist, a [GPU] is horribly inefficient,” Rao stated bluntly. “Honestly, it’s a really crappy set of computational primitives for AI.”
Why the harsh assessment? Rao contrasts the GPU’s brute-force methods with the brain’s elegant efficiency. He points to operations like “all-reduce,” common in training large models, which involve communicating vast amounts of matrix data at significant computational cost. “Our brains don’t do anything like that,” he explained. “They do things in approximations and other ways that are much more efficient.” Look no further than energy consumption: the human brain operates on roughly 20-30 watts, while GPU-heavy AI training clusters consume megawatts—orders of magnitude more.
Part of this inefficiency, Rao suggests, stems from how we utilize silicon. “Silicon is great… but we actually don’t use the richness of representations,” he observed. Current digital systems largely reduce the complex physical properties of silicon devices—capable of representing a wide range of values—to simple binary states (0s and 1s). While this simplification enabled the digital revolution and Moore’s Law scaling, Rao believes “there’s an opportunity to kind of re-examine the physical substrate,” particularly for AI. Harnessing the inherent analog capabilities of silicon or developing entirely new architectures inspired by the brain’s efficient, approximate methods represents a significant, long-term R&D challenge—and potentially, the path to truly efficient artificial intelligence.
On AI agents and future intent
Rao distinguishes current “AI agents” (systems linking tools/LLMs) from true autonomous agency. He views today’s agents as “RPA [Robotic Process Automation] on steroids”—sophisticated automation for predefined tasks.
Genuine agency requires “the ability to… self-learn from your mistakes,” Rao argues, a capability largely missing today. Another key missing piece is intent.
He contrasts this with humans: “typically, there’s an intent,” Rao explained. Humans might forget. “But typically there is intent. If [a person is] lying, [that person is] intending to lie.” Current large language models, by contrast, “don’t intend to do anything,” Rao stated; their errors or “hallucinations” are process flaws, not deliberate actions. Developing AI that can form intent, act, self-check, and learn is the next frontier, likely “maybe five or ten years out,” Rao predicts.
Finding AI signal amidst the noise
Rao emphasizes that navigating the AI hype requires a critical filter honed by research. “Being able to understand what’s real and what’s not is super important,” he stressed.
This clarity underpins effective strategy. For CIOs, it means focusing squarely on “Evaluation—understanding success” and defining metrics before deployment. For founders, Rao noted the challenge of perspective, cautioning: “Understanding AI is still a fundamental problem.”
People either think it’s much better than it is, or they don’t understand the impact. I don’t see much in the middle, which is where the truth lies.
Whether assessing technology or building a new venture, Rao extols the benefits of a laser focus on the right target: “Don’t focus on competition. Think about what you’re actually trying to build and who you’re building it for.”