
Navigating the double-edged sword of genAI in research
“When people talk about pitfalls, they mention misinformation, errors, confidentiality concerns, and worries about getting irrelevant answers,” said Mirit Eldor, managing director, life sciences solutions at Elsevier. A large majority of respondents (91%) expect generative AI tools to draw only from high-quality, trusted sources. This expectation stems from real concerns, as Eldor explains: “For example, in the early days, we tried different things, including some model training ourselves. If you ask a broad database a medical question, the answer can be completely inaccurate if the database includes animal health information alongside human health data.”
The survey shows that over 50% of companies now have rules in place, often forbidding employees from uploading information to off-the-shelf generative AI tools. “It’s really been a discovery process for many companies in this last year or so,” Eldor said. “This survey is a snapshot of where things are right now,” she added. Given how quickly the landscape has shifted since ChatGPT debuted, “a day in generative AI was like a year because things were moving so quickly,” Eldor said.
“It used to be that you had your end-user bench scientists who do the research and read journals – they didn’t use AI. They read, they think, they experiment, and then they write. Then you had your computational scientists who build models, and they’re really into AI. There used to be this division. And then comes this new technology of generative AI and changes this.”
— Mirit Eldor
GenAI expertise is rare, but there are regional differences
Despite the substantial long-term genAI expectations, only 11% of respondents consider themselves “very familiar” with the technology. Among those who are familiar, about half (54%) have actively used genAI tools, while nearly one-third (31%) of all respondents have used AI for specific work-related purposes, with significant regional variation: 39% in China versus 22% in India.
“People are excited, but it still feels like it’s more early days than I expected,” Eldor said. “I thought more people would be experimenting already and that there would be more tangible examples of how AI is used.”
Building trustworthy machines
As noted above, 91% of respondents expect generative AI tools to draw from high-quality, trusted sources. “What we don’t like is a black box where you don’t understand why the model gave you a particular answer,” Eldor said. “The way the model works has to be explainable, and the sources of content have to be transparent.”
Elsevier, in its AI tools, uses ontologies and taxonomies to give users “a replicable, consistent answer.” Another technology increasingly used to “ground” genAI systems is retrieval-augmented generation (RAG), which searches a local database or knowledge graph for information relevant to the input query and then hands the retrieved material to a generative AI system to formulate a response. “I think the future is using RAG for retrieval and summarization, but keeping the content protected,” Eldor said. “The content is still high quality, good science, and ethical.”
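To make the RAG pattern concrete, below is a minimal sketch of the retrieve-then-generate flow. It assumes a toy in-memory corpus and naive keyword-overlap scoring in place of a production vector index or knowledge graph, plus a stub `generate()` function standing in for whichever LLM API a team has approved; the names here are illustrative, not Elsevier’s actual implementation.

```python
# Minimal RAG sketch: retrieve passages from a curated corpus, then hand
# only the retrieved material to a generative model as grounding context.

# Toy in-memory corpus standing in for a curated database or knowledge graph.
CORPUS = {
    "doc-001": "Ibuprofen is a nonsteroidal anti-inflammatory drug used in humans.",
    "doc-002": "Carprofen is an NSAID approved for veterinary use in dogs.",
    "doc-003": "NSAIDs reduce inflammation by inhibiting cyclooxygenase enzymes.",
}

def generate(prompt: str) -> str:
    """Stub for the generative step; a real system would call its approved LLM here."""
    return f"(model output grounded in the prompt)\n{prompt}"

def retrieve(query: str, k: int = 2) -> list[tuple[str, str]]:
    """Rank documents by naive keyword overlap with the query (illustrative only)."""
    terms = set(query.lower().split())
    scored = sorted(
        ((len(terms & set(text.lower().split())), doc_id, text)
         for doc_id, text in CORPUS.items()),
        reverse=True,
    )
    return [(doc_id, text) for score, doc_id, text in scored[:k] if score > 0]

def answer(query: str) -> dict:
    """Retrieve grounding passages, then ask the model to answer from them only."""
    passages = retrieve(query)
    context = "\n".join(f"[{doc_id}] {text}" for doc_id, text in passages)
    prompt = (
        "Answer using ONLY the sources below, and cite their IDs.\n"
        f"Sources:\n{context}\n\nQuestion: {query}"
    )
    return {"answer": generate(prompt), "sources": [doc_id for doc_id, _ in passages]}
```

Because the model sees only the retrieved passages rather than the full corpus, the underlying content can remain behind access controls, which is one way the “keeping the content protected” property Eldor describes is typically achieved.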
One of the key challenges in AI adoption is ensuring that results are both explainable and replicable. “Today, with many AI systems, you don’t understand the answer you get, and the next time you ask the question, you might get a slightly different answer that you can’t quite compare,” Eldor said. “Explainable means you link it to the source, explain where it comes from, and why this is the answer. It’s also about the replicability of getting a consistent answer every time you ask the question.”
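Building on the illustrative sketch above, both properties can be checked directly: the source IDs returned with each answer make it traceable, and because the retrieval step is deterministic, the same question surfaces the same grounding every time. This is a toy check under the sketch’s assumptions, not a description of any particular product.

```python
# Explainability: the answer carries the IDs of the sources it was built from,
# so a researcher can trace any claim back to the corpus.
result = answer("Which NSAID is used in dogs?")
print(result["sources"])  # e.g. ['doc-002', 'doc-001']

# Replicability: deterministic retrieval yields identical grounding on a repeat ask.
rerun = answer("Which NSAID is used in dogs?")
assert result["sources"] == rerun["sources"]
```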
This need for transparency underscores a fundamental shift in the integration of AI in research. It’s no longer enough to have powerful algorithms crunching data; those algorithms must also be accountable to the researchers who rely on their output. “It’s the expertise that helps really pull it together in the world of AI and generally with new technologies,” Eldor said. “When we think about how to approach complex issues, it’s that combination of the best content we can get, the essence of the subject matter expertise, and the technology. We always think about what is the best combination of these three. Without the expertise, it doesn’t work.”