Generating Sets of Classifiers for the Evaluation of Multi-expert Systems
Impedovo, Donato; Pirlo, Giuseppe
2010-01-01
Abstract
This paper addresses the problem of evaluating multiclassifier systems by means of artificially generated classifiers. For this purpose, a new technique is presented for generating sets of artificial abstract-level classifiers with different characteristics at the individual level (i.e., recognition performance) and at the collective level (i.e., degree of similarity). The technique has been used to generate sets of classifiers that simulate different working conditions under which the performance of combination methods can be estimated. Experimental tests demonstrate the effectiveness of the approach in generating simulated data useful for investigating the performance of combination methods for abstract-level classifiers.
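To illustrate the kind of generator the abstract refers to, the following Python sketch produces label-only (abstract-level) outputs for a set of classifiers with a target individual recognition rate and a tunable degree of collective similarity. The generation rule used here (mixing each classifier's independent decisions with a shared reference output with probability `similarity`) is an assumption chosen for illustration only; it is not the technique proposed in the paper, and all function and parameter names are hypothetical.

import numpy as np

def generate_classifier_set(n_samples=1000, n_classes=10, n_classifiers=5,
                            accuracy=0.85, similarity=0.5, seed=0):
    """Generate abstract-level (label-only) outputs for a set of classifiers.

    accuracy   : target recognition rate of each individual classifier
    similarity : probability that a classifier copies a shared reference
                 decision instead of drawing its own, which raises the
                 agreement among classifiers (illustrative rule only)
    """
    rng = np.random.default_rng(seed)
    truth = rng.integers(0, n_classes, n_samples)          # ground-truth labels

    def single_output():
        # correct with probability `accuracy`, otherwise a random wrong label
        out = truth.copy()
        wrong = rng.random(n_samples) >= accuracy
        out[wrong] = (truth[wrong] + rng.integers(1, n_classes, wrong.sum())) % n_classes
        return out

    reference = single_output()                             # shared decisions
    classifiers = []
    for _ in range(n_classifiers):
        own = single_output()                               # independent decisions
        copy_ref = rng.random(n_samples) < similarity
        classifiers.append(np.where(copy_ref, reference, own))
    return truth, classifiers

truth, outputs = generate_classifier_set()
for k, out in enumerate(outputs):
    print(f"classifier {k}: recognition rate = {(out == truth).mean():.3f}")

Because the reference output has the same individual recognition rate as the independent outputs, mixing the two leaves each classifier's accuracy near the target while the `similarity` parameter only controls how strongly the classifiers agree, which is the separation of individual-level and collective-level characteristics described in the abstract.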