Results of preclinical studies by investigators at the Medical University of South Carolina (MUSC), reported in the August 2016 issue of Arthritis & Rheumatology, demonstrate for the first time that including novel biomarkers in lupus nephritis (LN) prognostic models significantly increases their power to predict therapeutic efficacy. Identifying biomarker models with sufficient predictive power is a critical step toward developing clinical decision-making tools that can rapidly identify patients who require a change in therapy and potentially reduce the onset of renal fibrosis during induction therapy.
Approximately half of all patients with systemic lupus erythematosus (SLE) develop LN, an immune complex-mediated glomerulonephritis. Lupus nephritis, in turn, leads to renal failure in up to 50% of patients within five years. American College of Rheumatology guidelines recommend changing LN treatment after six months of induction therapy if response to therapy is not achieved. However, ‘response to therapy’ is not clearly defined, and renal damage can occur during the six-month induction period.
Currently, clinicians monitor response to treatment via blood pressure measurements, serum complement levels, anti-double-stranded DNA (anti-dsDNA) antibody levels, urinary sediment, urinary protein-to-creatinine ratios, and surrogates of renal function. Unfortunately, these traditional biomarkers have low sensitivity, and LN is highly heterogeneous at presentation, making disease progression difficult to predict from them.
Even when machine learning models are employed, traditional biomarkers are only 69% accurate in predicting an LN diagnosis among SLE patients. There is a need for individualized decision-support tools that can better define ‘therapeutic response’ at the start of therapy and allow clinicians to tailor induction therapy to disease severity, preventing renal damage and unnecessary drug toxicity.
“We saw our colleagues’ frustration in trying to come up with predictive models,” said Jim C. Oates, M.D., Associate Director of the MUSC Clinical and Translational Research Center, Associate Professor of Rheumatology, and senior author on the article. “The traditional markers we use in clinic today have quite limited predictive capacity. All lupus patients have varying degrees of kidney damage and levels of involvement of the different kidney structures. So, we wanted to account for this heterogeneity and the stages of disease progression. We wanted to include markers for pathways of inflammation as well as for damage.”
The research team hypothesized that a targeted panel of urinary biomarkers reflecting initial resident and inflammatory cell activation (cytokines), signals for homing to the kidney (chemokines), activation of inflammatory cells (growth factors), and damage to resident cells, combined with artificial intelligence/machine learning modeling, might provide an early LN decision-support tool that could predict outcomes better than standard biomarkers alone. The team also chose to assess urine biomarkers rather than serum/plasma markers to increase the tool’s sensitivity and specificity to signals of renal (rather than systemic) processes.
Urine samples from 140 patients with biopsy-proven LN who had not yet started induction therapy were analyzed for a panel of novel biomarkers using pre-mixed, commercially available kits. Univariate receiver operating characteristic (ROC) curves were generated for each biomarker and compared, by area under the curve (AUC), with machine learning models developed using random forest algorithms. Outcome models using novel biomarkers plus traditional clinical markers demonstrated greater AUC and significance than models developed with traditional markers alone ([AUC 0.79; P<0.001] vs. [AUC 0.61; P=0.05], respectively). The combined models also demonstrated greater power to correctly predict LN therapy outcomes (responder versus non-responder) than models using only traditional markers (76% vs. 27%, respectively [P<0.002]).
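For readers curious how this kind of comparison is typically built, the sketch below shows, in broad strokes, how a random forest classifier trained on combined novel and traditional markers could be compared against a traditional-markers-only model by cross-validated ROC AUC. It is illustrative only: the file name, column names, analytes, and validation scheme are placeholder assumptions, not the study’s actual data or pipeline.

```python
# Illustrative sketch only: the CSV file and column names below are hypothetical,
# not the study's dataset or analysis code.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_predict
from sklearn.metrics import roc_auc_score

df = pd.read_csv("ln_baseline_urine_markers.csv")        # hypothetical file

traditional = ["protein_creatinine_ratio", "serum_creatinine", "c3", "c4", "anti_dsDNA"]
novel = ["il8", "mcp1", "il6", "vegf", "kim1"]            # placeholder analyte names
y = df["responder"]                                       # 1 = responder, 0 = non-responder

def cv_auc(features):
    """Cross-validated ROC AUC for a random forest on the given feature set."""
    model = RandomForestClassifier(n_estimators=500, random_state=0)
    cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
    probs = cross_val_predict(model, df[features], y, cv=cv, method="predict_proba")[:, 1]
    return roc_auc_score(y, probs)

print("Traditional markers only, AUC:", round(cv_auc(traditional), 2))
print("Traditional + novel markers, AUC:", round(cv_auc(traditional + novel), 2))
```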
The team identified chemokines, cytokines, and markers of cellular damage as the most predictive of LN therapy response. Race, anti-dsDNA antibodies, and induction medication did not contribute significantly to the model.
“We were somewhat surprised by some of the analytes that were important in the model,” said Oates. “One traditional marker, protein-to-creatinine ratio, was the third most important, and a standard kidney function measure was the ninth. I was also surprised to see interleukin-8 so high. This is in keeping with recent publications highlighting the importance of neutrophils in the pathogenesis of lupus, however.”
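The ranking Oates describes reflects the variable importance scores that random forest models assign to their inputs. A minimal sketch of extracting such a ranking, again with hypothetical feature names rather than the study’s actual panel, might look like this:

```python
# Illustrative only: hypothetical data and feature names, as in the sketch above.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

df = pd.read_csv("ln_baseline_urine_markers.csv")             # hypothetical file
features = ["protein_creatinine_ratio", "serum_creatinine",   # traditional markers
            "il8", "mcp1", "il6", "vegf", "kim1"]             # placeholder novel analytes

model = RandomForestClassifier(n_estimators=500, random_state=0)
model.fit(df[features], df["responder"])

# Rank inputs by the forest's impurity-based variable importance.
ranking = pd.Series(model.feature_importances_, index=features).sort_values(ascending=False)
print(ranking)
```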
Including multiple mechanisms of disease pathogenesis and cellular damage likely provides a more effective diagnostic approach by better reflecting the multi-stage, heterogeneous nature of LN. This is the first study to combine a broad biomarker panel with machine learning techniques to optimize disease outcome models. “This could apply to any model where there is kidney inflammation leading to damage,” said Oates. “It’s proof of concept for other kidney diseases that you can take a discovery model and incorporate machine learning to develop and validate predictive models.”
The team is now testing other biomarkers and applying the model in a larger patient population to ensure external validity and improve power. They are also exploring other inputs.
“Our next approach is to harness existing data in the medical record to enhance predictions,” said Oates. “This is much more immediately translatable in the clinic than getting through a long FDA validation process and the industry pipeline. Using medical record data is cheaper, and there are patient and system factors in the medical record that you can’t measure with an assay, such as economic and societal disparities, which affect outcomes. This approach could also be used to enhance biomarker predictive models.”