arxiv:2003.03879

An Empirical Evaluation on Robustness and Uncertainty of Regularization Methods

Published on Mar 9, 2020
Authors:

Abstract

Despite the apparent human-level performance of deep neural networks (DNNs), they behave fundamentally differently from humans. They easily change predictions when small corruptions such as blur and noise are applied to the input (lack of robustness), and they often produce confident predictions on out-of-distribution samples (improper uncertainty measure). While a number of studies have aimed to address these issues, the proposed solutions are typically expensive and complicated (e.g., Bayesian inference and adversarial training). Meanwhile, many simple and cheap regularization methods have been developed to enhance the generalization of classifiers. Such regularization methods have largely been overlooked as baselines for addressing the robustness and uncertainty issues, as they are not specifically designed for that purpose. In this paper, we provide extensive empirical evaluations of the robustness and uncertainty estimates of image classifiers (CIFAR-100 and ImageNet) trained with state-of-the-art regularization methods. Furthermore, experimental results show that certain regularization methods can serve as strong baselines for robustness and uncertainty estimation in DNNs.
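
To make the two evaluation axes in the abstract concrete, below is a minimal illustrative sketch (not the paper's actual protocol or benchmark) of how corruption robustness and uncertainty behavior might be probed for a trained classifier. The names `model` and `loader` are assumed placeholders for any PyTorch image classifier and data loader; additive Gaussian noise stands in for the blur/noise corruptions mentioned above, and maximum softmax probability stands in for the uncertainty measure.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def corruption_accuracy(model, loader, noise_std=0.1, device="cpu"):
    """Accuracy on inputs perturbed with additive Gaussian noise
    (a simple stand-in for the blur/noise corruptions discussed above)."""
    model.eval()
    correct, total = 0, 0
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        corrupted = images + noise_std * torch.randn_like(images)
        preds = model(corrupted).argmax(dim=1)
        correct += (preds == labels).sum().item()
        total += labels.numel()
    return correct / total

@torch.no_grad()
def mean_max_confidence(model, loader, device="cpu"):
    """Average maximum softmax probability over a dataset; evaluated on
    out-of-distribution data, a well-calibrated model should report low values."""
    model.eval()
    confidences = []
    for images, _ in loader:
        probs = F.softmax(model(images.to(device)), dim=1)
        confidences.append(probs.max(dim=1).values)
    return torch.cat(confidences).mean().item()
```

In this sketch, a robust classifier would show a small accuracy drop under `corruption_accuracy` relative to clean accuracy, and a classifier with proper uncertainty would show markedly lower `mean_max_confidence` on out-of-distribution data than on in-distribution data.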
