Positive-Unlabeled (PU) learning considers a set of positive samples and a (usually larger) set of unlabeled ones. This challenging setting requires algorithms to cleverly exploit dependencies hidden in the unlabeled data in order to build models that accurately discriminate between positive and negative samples. We propose to exploit probabilistic generative models to characterize the distribution of the positive samples, and to label as reliable negatives those unlabeled samples that lie in the lowest-density regions with respect to the positive ones. The overall framework is flexible enough to be applied to many domains by leveraging tools provided by years of research from the probabilistic generative model community. In addition, we show how to create mixtures of generative models by adopting a well-known bagging method from the discriminative framework as an effective and cheap alternative to classical Expectation Maximization. Results on several benchmark datasets demonstrate the performance and flexibility of the proposed approach.
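The core idea of the abstract can be sketched as follows: fit a density model to the positive samples only, then treat the unlabeled points with the lowest density under that model as reliable negatives. This is an illustrative sketch on synthetic data, using a single Gaussian as a stand-in for the paper's generative models and an assumed 40% quantile threshold; the actual method and its ensemble construction are more elaborate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic PU data: labeled positives from one Gaussian blob; the
# unlabeled set mixes hidden positives and hidden negatives.
pos = rng.normal(loc=2.0, scale=0.5, size=(100, 2))
unlabeled = np.vstack([
    rng.normal(loc=2.0, scale=0.5, size=(50, 2)),   # hidden positives
    rng.normal(loc=-2.0, scale=0.5, size=(50, 2)),  # hidden negatives
])

# Fit a single Gaussian density to the positive samples (a simple
# placeholder for a richer probabilistic generative model).
mu = pos.mean(axis=0)
cov = np.cov(pos, rowvar=False) + 1e-6 * np.eye(2)
inv = np.linalg.inv(cov)

def log_density(x):
    # Unnormalized Gaussian log-density; the normalizer is constant
    # and irrelevant for ranking points.
    d = x - mu
    return -0.5 * np.einsum('ij,jk,ik->i', d, inv, d)

# Label the unlabeled points in the lowest-density regions as reliable
# negatives (here: the bottom 40% by log-density, an arbitrary choice).
scores = log_density(unlabeled)
threshold = np.quantile(scores, 0.4)
reliable_neg = unlabeled[scores <= threshold]
```

The selected `reliable_neg` set can then seed an ordinary binary classifier; the bagging variant mentioned in the abstract would repeat the density-fitting step on bootstrap resamples of the positives and aggregate the resulting scores.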
|Title:||Ensembles of density estimators for positive-unlabeled learning|
|Publication date:||2019|
|Item type:||1.1 Journal article|