- Summary: The talk introduced the concepts of aleatoric and epistemic uncertainty (aleatoric: irreducible noise in the data itself; epistemic: reducible uncertainty from limited data or model knowledge) and compared various methods for uncertainty estimation along several dimensions, such as predictive performance and implementation effort. A simple one-variable toy dataset was used to evaluate these methods in practice; some of them, such as Monte Carlo dropout, performed surprisingly poorly. I personally would have liked to learn why some methods performed better or worse on this dataset and how that generalizes to real-world datasets, but based on later conversations with the speaker, this appears to be a hard question for some of the methods discussed. What I definitely took away was how to explain the difference between aleatoric and epistemic uncertainty, as well as quantile regression, to a broad audience. It was also the first time I had been given such a systematic overview of uncertainty quantification.
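
As an illustration of the quantile-regression idea mentioned above, here is a minimal sketch on a synthetic one-variable dataset. The data-generating process, model choice, and quantile levels are my own assumptions for illustration, not the speaker's actual setup:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Hypothetical 1D toy dataset with input-dependent (aleatoric) noise,
# loosely mirroring the single-variable example style used in the talk.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=500).reshape(-1, 1)
y = np.sin(X[:, 0]) + rng.normal(0.0, 0.1 + 0.2 * np.abs(X[:, 0]))

# Quantile regression: fit one model per quantile by minimizing the
# pinball loss. The spread between the 5th and 95th percentile
# predictions gives a direct estimate of the aleatoric uncertainty.
quantiles = [0.05, 0.5, 0.95]
models = {
    q: GradientBoostingRegressor(loss="quantile", alpha=q).fit(X, y)
    for q in quantiles
}

# The 90% prediction interval should widen where the noise is larger.
X_test = np.linspace(-3, 3, 200).reshape(-1, 1)
lower, median, upper = (models[q].predict(X_test) for q in quantiles)
print("mean 90% interval width:", np.mean(upper - lower))
```

Note that this captures only aleatoric uncertainty; epistemic uncertainty would additionally require something like an ensemble or a Bayesian treatment of the model parameters.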