Benchmarking of Update Learning Strategies on Digit Classifier Systems

Barbuzzi, Donato; Impedovo, Donato; Pirlo, Giuseppe
2012-01-01

Abstract

Three different strategies for re-training classifiers in a multi-expert scenario, when new labeled data become available, are presented. The first uses the entire new dataset. In the second, each individual classifier selects, from the new data, the samples on which it produces a misclassification. In the third, which inspects the behavior of the multi-expert system, a sample misclassified by an expert is used to update that classifier only if it is also misclassified by the ensemble. This paper compares the three approaches under different conditions on two state-of-the-art classifiers (SVM and Naive Bayes), taking into account four different combination techniques. Experiments have been performed on the CEDAR handwritten digit database. Results show that performance depends on the amount of new training samples, as well as on the specific combination decision scheme and on the classifiers in the ensemble. © 2012 IEEE.
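The three update-sample selection rules summarized in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the prediction lists, and the data layout are all assumptions made for the example.

```python
# Sketch of the three re-training sample-selection strategies from the
# abstract. Each function returns the subset of new samples that a given
# expert would use to update itself. All names here are illustrative.

def select_all(new_samples, expert_pred, ensemble_pred, labels):
    # Strategy 1: feed back the entire new labeled dataset.
    return list(new_samples)

def select_expert_errors(new_samples, expert_pred, ensemble_pred, labels):
    # Strategy 2: each expert keeps only the samples it misclassified.
    return [x for x, p, y in zip(new_samples, expert_pred, labels)
            if p != y]

def select_shared_errors(new_samples, expert_pred, ensemble_pred, labels):
    # Strategy 3: keep a sample only if the expert misclassified it AND
    # the combined multi-expert decision also misclassified it.
    return [x for x, p, e, y in zip(new_samples, expert_pred,
                                    ensemble_pred, labels)
            if p != y and e != y]
```

With these rules, strategy 3 always selects a subset of the samples chosen by strategy 2, which in turn is a subset of strategy 1; the comparison in the paper is essentially about how these shrinking update sets affect the ensemble after re-training.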
2012
978-0-7695-4774-9
Files in this record:
No files are associated with this record.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11586/69653
Warning: the displayed data have not been validated by the university.

Citations
  • Scopus: 7