Damen JAAG, Debray TPA, Pajouheshnia R, Reitsma JB, Scholten RJPM, Moons KGM, Hooft L. Empirical evidence on the impact of study characteristics on the performance of prognostic models: a meta-epidemiological study. Presented at the Methods for Evaluation of Medical Prediction Models, Tests and Biomarkers (MEMTAB) 2018 Symposium; July 2, 2018; Utrecht, The Netherlands. [abstract] Diagn Progn Res. 2018 Jul 2;2(Suppl 1):29-30. doi:10.1186/s41512-018-0036-3


BACKGROUND: Meta-epidemiological studies have shown that shortcomings in study design can lead to biased estimates of treatment effects and diagnostic test accuracy. It remains unclear to what extent study characteristics may affect estimates of prognostic model performance.

OBJECTIVES: To assess the relation between study characteristics and the results of external validation studies of prognostic models.

METHODS: We searched electronic databases for systematic reviews of prognostic models. Reviews from non-overlapping clinical fields were selected if they reported common performance measures (the concordance (c)-statistic or the ratio of observed to expected numbers of events (OE ratio)) from ten or more validations of the same model. From the included validation studies we extracted study design features, population characteristics, methods of predictor and outcome assessment, and the aforementioned performance measures. Random-effects meta-regression was used to quantify the association between study characteristics and model performance.
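To make the analytic approach concrete, the sketch below fits a random-effects meta-regression of per-validation effect estimates (for example, logit c-statistics) on a single study-level characteristic, using a generic method-of-moments (DerSimonian-Laird-type) estimate of the between-study variance. The function name, the toy data, and the choice of estimator are illustrative assumptions and do not reproduce the authors' actual analysis.

import numpy as np

def random_effects_meta_regression(y, v, x):
    """Random-effects meta-regression of per-study effect estimates.

    y : effect estimate per validation (e.g. logit c-statistic)
    v : within-study variance of each estimate
    x : study-level covariate (e.g. 1 = different continent, 0 = same)

    Estimates the between-study variance tau^2 by a method-of-moments
    formula, then refits by weighted least squares with weights
    1/(v + tau^2). Returns (coefficients, standard errors, tau^2).
    """
    y, v, x = map(np.asarray, (y, v, x))
    X = np.column_stack([np.ones_like(x, dtype=float), x])  # intercept + covariate
    W = np.diag(1.0 / v)                                     # fixed-effect weights

    # Fixed-effect fit and residual heterogeneity statistic Q = y' P y
    XtWX_inv = np.linalg.inv(X.T @ W @ X)
    P = W - W @ X @ XtWX_inv @ X.T @ W
    Q = float(y @ P @ y)

    # Method-of-moments estimate of the between-study variance
    k, p = X.shape
    tau2 = max(0.0, (Q - (k - p)) / np.trace(P))

    # Random-effects weights and final weighted least-squares fit
    Wr = np.diag(1.0 / (v + tau2))
    cov = np.linalg.inv(X.T @ Wr @ X)
    beta = cov @ X.T @ Wr @ y
    se = np.sqrt(np.diag(cov))
    return beta, se, tau2

# Hypothetical toy data: logit c-statistics from six validations of one model,
# their within-study variances, and an indicator for a different continent.
y = [1.10, 1.25, 0.95, 1.30, 1.05, 1.20]
v = [0.010, 0.015, 0.012, 0.020, 0.008, 0.018]
x = [0, 1, 0, 1, 0, 1]
beta, se, tau2 = random_effects_meta_regression(y, v, x)
print(beta[1], se[1])  # estimated difference in logit c-statistic and its standard error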

RESULTS: We included ten reviews, describing a total of 224 validations. Associations between study characteristics and model performance were heterogeneous across reviews. C-statistics were most strongly associated with population characteristics and with how predictors and outcomes were measured. For example, validation in a continent different from that of the development study was associated with a higher c-statistic than validation in the same continent (difference in logit c-statistic 0.10 [95% CI 0.04, 0.16]), and validations with eligibility criteria comparable to those of the development study were associated with higher c-statistics than validations with narrower criteria (difference in logit c-statistic 0.21 [95% CI 0.07, 0.35]). A case-control design was associated with higher OE ratios than a cohort design (difference in log OE ratio 0.97 [95% CI 0.38, 1.55]).
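To give a feel for the size of these differences, the short sketch below back-transforms them to more familiar scales; the baseline c-statistic of 0.75 is an arbitrary illustrative value, not a figure reported in the study.

import math

def logit(p):
    return math.log(p / (1 - p))

def expit(x):
    return 1.0 / (1.0 + math.exp(-x))

c_baseline = 0.75                            # hypothetical c-statistic for same-continent validation
c_shifted = expit(logit(c_baseline) + 0.10)  # apply the reported +0.10 difference in logit c-statistic
print(round(c_shifted, 3))                   # ~0.768, i.e. about 0.02 higher on the c-statistic scale

oe_factor = math.exp(0.97)                   # reported +0.97 difference in log OE ratio
print(round(oe_factor, 2))                   # ~2.64-fold higher OE ratio

Reporting differences on the logit and log scales keeps the back-transformed estimates within their natural bounds; note that a fixed logit difference corresponds to a smaller absolute change in the c-statistic as the baseline value approaches 1.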

CONCLUSION: Variation in the performance of prognostic models appears to be associated mainly with variation in case-mix, study design, and the methods used to measure predictors and outcomes. Researchers validating prognostic models should take these study characteristics into account when interpreting the observed model performance.
