PhD position Temporal Performance Deterioration in AI-based Prediction Models
Join the Data Science group at the Julius Center (UMC Utrecht) to develop, evaluate, and implement innovative AI-based prediction methods that make a real impact in healthcare. You will work on cutting-edge research in a collaborative environment.
This PhD position is part of the VIDI project "The MOT for safe and effective predictive AI in healthcare: methods for periodic tests and revision", funded by ZonMw. The project brings together experts in prediction modeling, longitudinal data analysis, and data science/AI, and aims to yield novel insights into how the performance of (AI-based) prediction models can be better monitored over time in healthcare practice.
The PhD candidate will:
The Data Science team at the Julius Center is a growing group of researchers working on methods and applications of AI in healthcare. The PhD candidate will be embedded in the AI Methods Lab of the UMC Utrecht and will work in close collaboration with clinical experts and with experts on the deployment and quality control of AI at the UMC Utrecht.
You will work in a diverse team of excellent researchers in the field of prediction models and longitudinal data analysis at the Julius Center. The supervision team will consist of Dr Maarten van Smeden, Prof Dr Carl Moons, Dr Anne de Hond, and Dr Nicole Erler.
Background
Prediction models based on Artificial Intelligence (AI) play an increasingly important role across medical specialities, with the aim of supporting medical decision making for individual patients. While it is widely recognized that AI tools need periodic tests and maintenance ("updating") to guarantee safety and effectiveness in medical decision making, there is currently no agreement on how, and how frequently, such tests should be performed. Using in-depth methods research and real-world use cases of implemented AI-based prediction models, this project explores how often AI-based prediction models need testing, and to what degree deteriorating predictions can be anticipated and prevented. The ambition is to provide a new framework with concrete guidance on performing periodic tests and maintenance: an MOT, not for motor vehicles, but for safe and effective AI in healthcare.
Currently, the predictive performance of prediction models is often evaluated in a way that ignores that performance may change over time: suddenly (e.g. due to an abrupt change in patient management policy or a newly available treatment), gradually (e.g. due to gradual changes in disease prevalence or patient mix), in a recurring pattern (e.g. seasonal effects), or in some combination of these. Detecting such temporal changes and trends is important when monitoring the performance of prediction models, and can inform the decision whether a (one-time) model revision is necessary and sufficient to maintain effective and safe prediction. In addition, there is currently a lack of methods that can detect, or even foresee, deteriorating performance at an early stage. New methods will be developed that facilitate early adjustment for anticipated deterioration in predictive performance and that generate indications of the shelf life of new (AI-based) prediction models.
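As an illustration of the kind of temporal monitoring described above, the sketch below computes a model's discrimination (ROC AUC) in consecutive time windows and flags windows where it drops below the initial level. This is only a minimal example, not the project's methodology; the simulated gradual drift, window sizes, and tolerance threshold are all invented for illustration.

```python
import numpy as np

def auc(y_true, y_score):
    """Mann-Whitney U estimate of the ROC AUC."""
    y_true = np.asarray(y_true)
    y_score = np.asarray(y_score)
    pos = y_score[y_true == 1]
    neg = y_score[y_true == 0]
    # Fraction of (positive, negative) pairs where the positive case
    # receives the higher score, counting ties as one half.
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

def rolling_auc(y_true, y_score, window, step):
    """AUC in consecutive windows; returns a list of (start_index, auc)."""
    out = []
    for start in range(0, len(y_true) - window + 1, step):
        sl = slice(start, start + window)
        out.append((start, auc(y_true[sl], y_score[sl])))
    return out

def flag_deterioration(window_aucs, baseline, tol=0.05):
    """Start indices of windows whose AUC fell more than `tol` below baseline."""
    return [start for start, a in window_aucs if a < baseline - tol]

# Simulated monitoring data: a model whose discrimination degrades
# in the second half of the observation period (gradual drift).
rng = np.random.default_rng(0)
n = 2000
y = rng.integers(0, 2, n)
signal = np.where(np.arange(n) < n // 2, 2.0, 0.2)
score = y * signal + rng.normal(0, 1, n)

windows = rolling_auc(y, score, window=500, step=250)
baseline = windows[0][1]           # AUC in the first monitoring window
flags = flag_deterioration(windows, baseline)
```

In practice the monitored metric would also include calibration, and the comparison would use proper statistical control limits rather than a fixed tolerance, but the windowed structure is the same.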
Profile
© BSL Media & Learning, part of Springer Nature