Risk analysts and project teams must rely on expert judgment to elicit subjective probabilities and ranges of potential cost and schedule impacts, especially when performing quantitative risk analyses. The main reason for using subject matter experts is the lack of reliable historical data on risk impacts. Research shows that subjective probabilities and risk impact ranges consistently yield overconfident or underconfident results, which in turn generate inaccurate cost values at the selected confidence levels and confidence intervals.

This paper explores the limitations of current elicitation approaches to collecting and using subjective probabilities and impact ranges to assess uncertainty and risks. It provides several examples of different calibration assessment results and their appropriate use to improve the quality of risk input data. It also presents a case for risk analysts to apply sound scientific rigor with respect to inputs when performing qualitative risk assessments and quantitative risk analyses in support of decision-making. The author suggests the use of calibration assessment in any modeling approach that relies on subjective inputs, whether decision trees, parametric models, Monte Carlo simulation, reference class forecasting, or system dynamics.
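The effect of poor calibration on simulation outputs can be illustrated with a minimal sketch (not taken from the paper; the distributions and values are hypothetical). Suppose the true cost uncertainty is Normal(100, 30), but an overconfident expert elicits a P10–P90 range of 90–110, implying a far narrower distribution. The stated 80% interval then covers far fewer than 80% of actual outcomes:

```python
import random

random.seed(42)

# Hypothetical "true" cost uncertainty (units of $K, say):
TRUE_MEAN, TRUE_SD = 100.0, 30.0

# Overconfident expert's elicited P10-P90 range (stated 80% interval):
P10, P90 = 90.0, 110.0

# Monte Carlo sample of actual cost outcomes:
N = 100_000
outcomes = [random.gauss(TRUE_MEAN, TRUE_SD) for _ in range(N)]

# Empirical coverage of the expert's stated 80% interval:
coverage = sum(P10 <= x <= P90 for x in outcomes) / N
print(f"Stated confidence: 80%  |  Actual coverage: {coverage:.0%}")
```

Under these assumed numbers, the expert's "80%" range captures only about a quarter of actual outcomes, which is the kind of miscalibration that calibration assessment is designed to detect and correct before expert inputs feed a quantitative risk model.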

By: Francisco Cruz Moreno, PE
