A comparison of hyperparameter tuning procedures for clinical prediction models: A simulation study

Authors:

Zoë S. Dunias (1), Ben Van Calster (2,3), Dirk Timmerman (2,4), Anne-Laure Boulesteix (5,6), Maarten van Smeden (1)

Affiliations:

1. Julius Center for Health Sciences and Primary Care, University Medical Center Utrecht, Utrecht, The Netherlands

2. Department of Development and Regeneration, KU Leuven, Leuven, Belgium

3. Department of Biomedical Data Sciences, Leiden University Medical Center, Leiden, The Netherlands

4. Department of Obstetrics and Gynecology, University Hospitals Leuven, Leuven, Belgium

5. Institute for Medical Information Processing, Biometry and Epidemiology, University of Munich, Munich, Germany

6. Munich Center for Machine Learning (MCML), LMU Munich, Munich, Germany

Abstract

Tuning hyperparameters, such as the regularization parameter in Ridge or Lasso regression, is often aimed at improving the predictive performance of risk prediction models. In this study, various hyperparameter tuning procedures for clinical prediction models were systematically compared and evaluated in low-dimensional data. The focus was on out-of-sample predictive performance (discrimination, calibration, and overall prediction error) of risk prediction models developed using Ridge, Lasso, Elastic Net, or Random Forest. The influence of sample size, number of predictors, and events fraction on the performance of the hyperparameter tuning procedures was studied using extensive simulations. The results indicate important differences between tuning procedures in calibration performance, whereas discriminative performance was generally similar. The one-standard-error rule applied to cross-validation (1SE CV) often resulted in severe miscalibration. Standard non-repeated and repeated cross-validation (both 5-fold and 10-fold) performed similarly well and outperformed the other tuning procedures. Bootstrap-based tuning showed a slight tendency toward more severe miscalibration than standard cross-validation-based tuning procedures. Differences between tuning procedures were larger for smaller sample sizes, lower events fractions, and fewer predictors. These results imply that the choice of tuning procedure can have a profound influence on the predictive performance of prediction models. The results support the application of standard 5-fold or 10-fold cross-validation that minimizes the out-of-sample prediction error. Despite its increased computational burden, we found no clear benefit of repeated over non-repeated cross-validation for hyperparameter tuning. We caution that the popular 1SE CV rule can have detrimental effects on model calibration when tuning prediction models in low-dimensional settings.
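The two tuning procedures the abstract contrasts most sharply are easy to illustrate. Below is a minimal Python sketch, assuming scikit-learn, that contrasts standard 10-fold cross-validation (selecting the penalty minimizing out-of-sample log loss) with the 1SE rule (selecting the strongest penalty whose mean cross-validated loss is within one standard error of that minimum) for a Ridge logistic regression. The simulated data, penalty grid, and all settings below are illustrative assumptions, not the paper's actual simulation design.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

# Simulated low-dimensional data: n = 500, 10 predictors, ~20% events
# (all values illustrative, not taken from the paper).
X, y = make_classification(n_samples=500, n_features=10, n_informative=5,
                           weights=[0.8], random_state=0)

Cs = np.logspace(-3, 3, 25)  # grid of inverse regularization strengths
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)

mean_loss, se_loss = [], []
for C in Cs:
    model = LogisticRegression(penalty="l2", C=C, max_iter=5000)
    # cross_val_score returns negative log loss; flip sign so lower is better.
    fold_loss = -cross_val_score(model, X, y, cv=cv, scoring="neg_log_loss")
    mean_loss.append(fold_loss.mean())
    se_loss.append(fold_loss.std(ddof=1) / np.sqrt(len(fold_loss)))
mean_loss, se_loss = np.array(mean_loss), np.array(se_loss)

# Standard CV: choose the C that minimizes mean out-of-sample log loss.
i_min = int(np.argmin(mean_loss))

# 1SE rule: choose the strongest penalty (smallest C) whose mean loss lies
# within one standard error of the minimum; Cs is ascending, so the first
# qualifying index corresponds to the most heavily shrunken model.
i_1se = int(np.flatnonzero(mean_loss <= mean_loss[i_min] + se_loss[i_min])[0])

print(f"standard 10-fold CV: C = {Cs[i_min]:.4g}")
print(f"1SE rule:            C = {Cs[i_1se]:.4g}")
```

The extra shrinkage chosen by the 1SE rule compresses predicted risks toward the overall event rate, which is the mechanism behind the miscalibration the abstract warns about in low-dimensional settings.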

Publisher

Wiley

Subject

Statistics and Probability; Epidemiology
