Checkmate on Drug Interaction Checkers
A molecular model-based system may be a more accurate solution
Drug Interaction Checker software is now readily available to physicians, pharmacists and the general public. A few clicks online will tell you whether your blood pressure medication will interfere with your anti-obesity drug — or will it? The data is easily accessed, but how accurate is the information?
Drug developers have very limited information about how their new products interact with the wide array of already approved pharmaceuticals. Unless a subject in one of the relatively limited clinical trials happens to show a reportable interaction effect, or unless the drug belongs to a class known for interactions, not much is known. It is also not unusual for the detailed mechanism of a new drug to remain unclear: in some cases, there is very little explanatory data that might point to potential drug interactions.
Should we require more extensive drug testing to determine possible interactions? Testing a new drug against every possible combination of the 100 most commonly prescribed drugs, alone or together, would require 2^100 - 1 (roughly 10^30) additional trials; even restricting the question to single drugs and pairs means more than 5,000 trials. To be safe, multiply that figure by the roughly 1,000 combinations of the 10 most common jointly occurring diseases, and by 10 again for the 10 most common genetic profiles. Even before you begin to consider possible interactions with herbals, over-the-counter drugs and nutraceuticals, you have increased the cost of a new drug product astronomically.
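A rough back-of-the-envelope tally, using only the illustrative figures above (100 common drugs, 10 common co-occurring diseases, 10 genetic profiles), shows how quickly the numbers explode. The counts below are a sketch of the combinatorics, not a trial-design estimate:

```python
from math import comb

N_DRUGS = 100      # most commonly prescribed drugs (illustrative figure from the text)
N_DISEASES = 10    # most common jointly occurring diseases
N_PROFILES = 10    # most common genetic profiles

# Testing a new drug against every non-empty combination of the 100 drugs
drug_combos = 2**N_DRUGS - 1                           # about 1.3e30 combinations

# Even limiting the question to single drugs and pairs is thousands of trials
limited_combos = comb(N_DRUGS, 1) + comb(N_DRUGS, 2)   # 100 + 4,950 = 5,050

# Layer on disease combinations and genetic profiles
disease_combos = 2**N_DISEASES - 1                     # 1,023
total_exhaustive = drug_combos * disease_combos * N_PROFILES

print(f"drug combinations alone:    {drug_combos:.3e}")
print(f"singles and pairs only:     {limited_combos:,}")
print(f"with diseases and profiles: {total_exhaustive:.3e}")
```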
We are left with the status quo: collecting and reporting post-market detections of interactions and side effects through adverse event databases coded with terminologies such as MedDRA and COSTART. However, this post hoc approach has two major flaws:
• First, it can only find problems after one or more people have suffered potentially life-threatening or quality-of-life-reducing symptoms. And, as the recalls in recent years of such popular drugs as fen-phen (with over $21 billion set aside to cover liabilities) and Vioxx (with costs still being counted and already exceeding $4 billion) have demonstrated, these dangerous interactions with other drugs or with multiple diseases can become very widespread before the trends are identified and acted upon.
• Second, any post hoc system by definition lags the present. A drug interaction checker can only search the database of previously reported interactions. If you fill a prescription on January 1, you are checking against problems accumulated over the previous six months or a year. Even if the software is updated daily, it is built on data that has been observed, collected, collated, reported to the FDA, reviewed, discussed and finally disseminated. And, if your prescription includes enough medication for three or more months, you can add that period to the delay between the observed problems and the patient's awareness of the danger.
The answer is to rely neither on extensive and expensive drug interaction testing prior to New Drug Application submission and approval, nor solely on post-marketing reporting and analysis of observed interactions. What is needed instead is a molecular model-based system that can inexpensively but accurately predict drug interactions.
Such a system, tied to the molecular modeling tools already in use at sophisticated discovery organizations, would greatly expand the set of probable interactions identified before a drug reaches patients, and could point to the clinical signs of those interactions. Obviously, such an endeavor would be large in scale, perhaps on the scale of the Human Genome Project. The economic payoff would be equally large, replacing unknowns with a predictive understanding of drug interactions and reducing risk for both industry developers and individual consumers. Testing the model's predictions against the catalog of already-known interactions would give the system a built-in validation check.
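As a sketch of that built-in check, and nothing more, the idea can be stated in a few lines. Everything here is hypothetical: predict_interaction stands in for a molecular-model-based predictor, and the known-interaction set stands in for a post-market reporting database.

```python
# Hypothetical sketch: score a predictive model against already-known interactions.
def validate_predictor(predict_interaction, drug_pairs, known_interactions):
    """Compare model predictions with the catalog of reported interactions."""
    true_pos = false_pos = false_neg = true_neg = 0
    for pair in drug_pairs:
        predicted = predict_interaction(*pair)           # True if an interaction is predicted
        observed = frozenset(pair) in known_interactions  # True if already reported post-market
        if predicted and observed:
            true_pos += 1
        elif predicted and not observed:
            false_pos += 1   # may be a real but not-yet-reported interaction
        elif not predicted and observed:
            false_neg += 1   # the dangerous case: a missed known interaction
        else:
            true_neg += 1
    denom = true_pos + false_neg
    sensitivity = true_pos / denom if denom else None
    return {"TP": true_pos, "FP": false_pos, "FN": false_neg, "TN": true_neg,
            "sensitivity": sensitivity}

# Toy usage with made-up data:
known = {frozenset({"drug_A", "drug_B"})}
pairs = [("drug_A", "drug_B"), ("drug_A", "drug_C")]
print(validate_predictor(lambda a, b: {a, b} == {"drug_A", "drug_B"}, pairs, known))
```

A model that misses known interactions (the false-negative case) would be flagged before anyone relied on its predictions for new, untested combinations.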
A Human Genome-type project such as this would be larger than most individual firms could support. Government oversight would be required, but funding should be spread among industry members and global in nature. The benefits would be substantial and should be publicly available. Results could also be released as they are generated, producing benefits throughout the project's life.
It is probable that much of this system could be built using automated software development processes, drawing on computer science techniques such as program verification, automated reasoning, model checking, static analysis, symbolic evaluation and machine learning, and applying them both to code generation and to the verification and validation of the resulting software.
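One simple flavor of such automated checking, sketched here under the assumption that the generated predictor exposes a pairwise interface (nothing below comes from an existing tool), is to state properties the code must always satisfy and test them mechanically, for example that whether an interaction is flagged for a pair of drugs does not depend on the order in which the drugs are listed:

```python
import itertools
import random

def check_symmetry(predict_interaction, drug_catalog, samples=1000, seed=0):
    """Property check: predicting (A, B) must give the same answer as (B, A)."""
    rng = random.Random(seed)
    pairs = list(itertools.combinations(drug_catalog, 2))
    for a, b in rng.sample(pairs, min(samples, len(pairs))):
        if predict_interaction(a, b) != predict_interaction(b, a):
            return f"symmetry violated for ({a}, {b})"
    return "symmetry holds on sampled pairs"

# Toy usage with a deliberately order-sensitive (buggy) predictor, to show the check firing:
def buggy(a, b):
    return a < b

catalog = [f"drug_{i}" for i in range(20)]
print(check_symmetry(buggy, catalog))
```

The same pattern extends to other invariants, such as predictions remaining stable across regenerated versions of the code.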
A second major aspect of such a large-scale project is the requirement for massive computing capability and the processing time itself. Other considerations include how to store, distribute and provide immediate access to the information. This will be well beyond today's pocket technology, unless miniaturization is able to maintain its recent pace.
The need for such a predictive cataloging system is approaching at high speed. Contemplate its preventive health potential, and the liability and risk management expenses, both accrued and actually incurred, that the pharmaceutical industry and health care providers could avoid. Consider, too, the ethical dimension of acting when we know we can produce a greater good, as opposed to declining to act in order to save costs. The time is near, if not now, for the meeting where this idea changes from thought to action.
Widely available drug information software relies on limited, post hoc data to flag possible drug interactions. Obtaining more reliable information on the interactions of all drugs, across a wide array of demographics, common diseases and genetic profiles, will be a huge enterprise. Producing that information before the first prescriptions for new drugs are written will require either delaying new drugs through expanded clinical trials or developing a very large-scale predictive system. Ethically, more information can be produced and therefore should be produced, to avoid the pain, suffering and costs of discovering dangerous drug interactions only after a drug is on the market.
Sandy Weinberg is an associate professor of health care management at Clayton State University and a senior consultant at Tunnell Consulting. Ron Fuqua is an assistant professor of health care management at Clayton State University. They may be reached at editor@ScientificComputing.com.