Methods for Hyperparameters Optimization in Learning Approaches: an overview
L. Selicato; N. Del Buono; F. Esposito
2020-01-01
Abstract
Machine learning research focuses on the development of methods capable of extracting useful information from a given dataset. A large variety of learning methods exists, ranging from biologically inspired neural networks to statistical methods. A common trait of these methods is that they are parameterized by a set of hyperparameters, which must be set appropriately by the user to maximize the usefulness of the learning approach. In this paper we review hyperparameter tuning and discuss its main challenges from an optimization point of view. We provide an overview of the most important approaches to the hyperparameter optimization problem, comparing their advantages and disadvantages, with a focus on gradient-based optimization.
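As a minimal illustration of the gradient-based approach mentioned in the abstract, the sketch below tunes the regularization hyperparameter of ridge regression by descending the gradient of the validation loss with respect to that hyperparameter (the "hypergradient"). The data, the closed-form ridge solution, and the chain-rule derivation are illustrative assumptions, not taken from the paper itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic regression data, split into training and validation sets.
n, d = 80, 10
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.5 * rng.normal(size=n)
Xtr, ytr, Xval, yval = X[:60], y[:60], X[60:], y[60:]

def ridge_weights(lam):
    """Closed-form ridge solution w(lam) = (Xtr^T Xtr + lam I)^{-1} Xtr^T ytr."""
    A = Xtr.T @ Xtr + lam * np.eye(d)
    return np.linalg.solve(A, Xtr.T @ ytr)

def val_loss(lam):
    """Mean squared error on the validation split, as a function of lam."""
    r = Xval @ ridge_weights(lam) - yval
    return (r @ r) / len(yval)

def hypergradient(lam):
    """Analytic d(val_loss)/d(lam) via the chain rule:
    dw/dlam = -A^{-1} w, hence dL/dlam = (2/n) r^T Xval (dw/dlam)."""
    A = Xtr.T @ Xtr + lam * np.eye(d)
    w = np.linalg.solve(A, Xtr.T @ ytr)
    dw = -np.linalg.solve(A, w)
    r = Xval @ w - yval
    return (2.0 / len(yval)) * r @ (Xval @ dw)

# Gradient descent on the hyperparameter itself, clamped to stay positive.
lam, lr = 1.0, 0.1
for _ in range(200):
    lam = max(lam - lr * hypergradient(lam), 1e-6)
```

Because the inner problem here has a closed-form solution, the hypergradient is exact; for iterative learners, the same derivative is typically approximated by differentiating through the optimization steps or via implicit differentiation.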