Nicholas Smeele
PhD Candidate & Researcher
Based in The Netherlands


About me

Nicholas Smeele is a PhD candidate in econometrics and machine learning at Erasmus University Rotterdam (Erasmus School of Health Policy & Management) and Delft University of Technology (Faculty of Technology, Policy and Management), where he is advised by Prof.dr. Esther de Bekker-Grob and Prof.dr. Caspar Chorus in the Erasmus Choice Modelling Centre and the BEHAVE research group.

His current research focuses on integrating classical econometric models, such as discrete choice models, with machine learning algorithms, such as neural networks, to understand human choice behavior in morally sensitive decision contexts. Nicholas aims to develop artificial intelligence (AI) driven moral choice models that elicit preferences, capture heuristics, and uncover the intentions and considerations people employ in their choice processes. Beyond that, he wants to help governments and public (health) institutions align their policy decisions with human preferences and (moral) values.

Nicholas is mainly interested in how humans make (public) health choices. Healthcare raises many morally sensitive questions and moral dilemmas: in the allocation of scarce health resources, for example, a decision may mean that one patient dies so that another can be saved. By combining insights from computational and economic modeling, artificial intelligence, (medical) ethics, and the behavioral sciences, human-centered models can be designed to rigorously unravel human choice behavior, informing policy makers so they can take health policy decisions in a value-based way, aligned with human moral values.

Prior to his PhD candidacy, Nicholas obtained his Master's degree in Data Science, cum laude, from Erasmus University Rotterdam (Erasmus School of Economics). During his Master's, he researched algorithmic fairness under the supervision of Prof.dr. Bas Donkers, working on methods to control and mitigate gender discrimination in deep artificial intelligence systems.