INSTANCE SELECTION METHOD IN MULTI-EXPERT SYSTEM FOR ONLINE SIGNATURE VERIFICATION
Pirlo, Giuseppe; Impedovo, Donato
2014-01-01
Abstract
In real-world applications, signature verification systems should be able to learn continuously, as new signatures providing additional information become available. However, new data are not all equally relevant for system improvement, and a suitable data filtering strategy is generally required. In this context, instance selection is an important task for signature verification systems: it identifies the useful signatures to be considered when updating system knowledge, while removing irrelevant and/or redundant instances from the new data. This paper proposes a new feedback-based learning strategy to update the knowledge base of a multi-expert signature verification system. In particular, the collective behavior of the classifiers is used to select the samples with which system knowledge is updated. Evaluation tests compare the proposed (non-naïve) approach with the traditional approach, which uses the entire new dataset for feedback. For this purpose, two state-of-the-art classifiers (Naïve Bayes and k-NN) and two abstract-level combination techniques (majority voting and weighted majority voting) were used. The experimental results, obtained on the SUSig database, demonstrate the effectiveness of the new strategy.
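The abstract describes the idea only at a high level. As a rough sketch of how the collective behavior of a multi-expert system could drive instance selection, the snippet below trains a Naïve Bayes and a k-NN expert, combines them by majority voting, and feeds back only the new signatures the ensemble currently misclassifies. The toy feature vectors, the disagreement-based selection rule, and all names (select_instances, majority_vote, etc.) are assumptions for illustration, not the criterion actually proposed in the paper.

```python
# Illustrative sketch only: not the paper's actual selection criterion.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier


def majority_vote(predictions):
    """Abstract-level combination: return the label most experts agree on."""
    values, counts = np.unique(predictions, return_counts=True)
    return values[np.argmax(counts)]


def select_instances(experts, X_new, y_new):
    """Keep only the new signatures that the ensemble's collective behavior
    suggests are informative (here: samples the majority vote misclassifies),
    instead of feeding back the entire new dataset."""
    selected = []
    for x, y in zip(X_new, y_new):
        votes = np.array([clf.predict(x.reshape(1, -1))[0] for clf in experts])
        if majority_vote(votes) != y:  # disagreement with the label -> informative
            selected.append((x, y))
    return selected


# Two experts (Naïve Bayes and k-NN) trained on the initial reference signatures.
X_ref = np.random.rand(40, 8)          # toy feature vectors (e.g. velocity, pressure statistics)
y_ref = np.random.randint(0, 2, 40)    # toy labels: 1 = genuine, 0 = forgery
experts = [GaussianNB().fit(X_ref, y_ref),
           KNeighborsClassifier(n_neighbors=3).fit(X_ref, y_ref)]

# New signatures arriving over time: only the selected ones update the knowledge base.
X_new, y_new = np.random.rand(10, 8), np.random.randint(0, 2, 10)
for x, y in select_instances(experts, X_new, y_new):
    X_ref = np.vstack([X_ref, x])
    y_ref = np.append(y_ref, y)
experts = [clf.fit(X_ref, y_ref) for clf in experts]   # retrain on the updated references
```

A weighted majority vote (the WMV combination mentioned in the abstract) would follow the same pattern, with each expert's vote scaled by a reliability weight before the labels are tallied.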