
A comparative study of shallow learning and deep transfer learning techniques for accurate fingerprints vitality detection

Donato Impedovo; Vincenzo Dentamaro; Giacomo Abbattista; Vincenzo Gattulli; Giuseppe Pirlo
2021-01-01

Abstract

This work examines deep learning architectures and shallow learning techniques to determine whether a fingerprint image is real (live) or not (fake). Deep learning techniques generally deliver good accuracies because they can automatically extract relevant patterns; at the same time, these algorithms require large amounts of data. For this reason, transfer learning aims to transfer the knowledge learned over a huge dataset to a new, smaller one. In this work, because of the limited size of the LivDet2019 dataset, three well-known deep learning architectures, Inception V3, ResNet50, and NASNet Large, have been modified to perform transfer learning from the huge ImageNet dataset to the smaller LivDet2019. The hypothesis at the basis of this work is that deep learning architectures trained on the huge ImageNet dataset learn to extract relevant patterns such as lines, shapes, curves, and transitions between curves. This knowledge is then fine-tuned on the LivDet2019 dataset to recognize fingerprint minutiae as a non-linear combination of the previously learned patterns. For the sake of completeness, state-of-the-art shallow learning image descriptors tuned for fingerprint recognition, namely Binarized Statistical Image Features (BSIF), Local Phase Quantization (LPQ), and the Weber Local Descriptor (WLD), are used to extract features from the LivDet2019 dataset. Classification on each of these feature sets is performed with both a linear and a non-linear (Gaussian) support vector machine. The resulting accuracies show that both shallow learning and deep learning techniques are on par with those of the reviewed works, and thus transfer learning in fingerprint liveness detection is a feasible strategy that deserves attention and further research aimed at increasing fingerprint liveness detection accuracies.
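The shallow-learning pipeline described in the abstract (descriptor features classified by a linear and a Gaussian SVM) can be sketched as follows. This is a minimal illustration, assuming scikit-learn and using synthetic random features as a stand-in for the BSIF/LPQ/WLD descriptors, which are not reproduced here; it is not the paper's actual experimental setup.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Synthetic stand-in for descriptor features (e.g. BSIF/LPQ/WLD histograms):
# 200 samples, 256-dimensional, two classes (fake = 0, live = 1).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (100, 256)),
               rng.normal(0.5, 1.0, (100, 256))])
y = np.array([0] * 100 + [1] * 100)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

# A linear SVM and a Gaussian (RBF-kernel) SVM, as in the paper's comparison.
linear_svm = make_pipeline(StandardScaler(), SVC(kernel="linear", C=1.0))
rbf_svm = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))

linear_svm.fit(X_tr, y_tr)
rbf_svm.fit(X_tr, y_tr)

acc_linear = linear_svm.score(X_te, y_te)
acc_rbf = rbf_svm.score(X_te, y_te)
print(f"linear SVM accuracy: {acc_linear:.2f}, RBF SVM accuracy: {acc_rbf:.2f}")
```

In practice the descriptor vectors extracted from the LivDet2019 images would replace the synthetic `X`, with one SVM trained per descriptor type.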

Use this identifier to cite or link to this document: https://hdl.handle.net/11586/419140

Citations
  • Scopus: 7
  • Web of Science: 6