Deep learning versus conventional learning in data streams with concept drifts

Corizzo R.;
2019-01-01

Abstract

In many real-world applications, the characteristics of data collected from activity logs, sensors, and mobile devices change over time. This behavior is known as concept drift. In complex environments that produce high-dimensional data streams, machine learning tasks become cumbersome, as models become outdated very quickly. In our study, we assess hundreds of combinations of data characteristics and methods on network traffic data. Specifically, we focus on seven conventional machine learning and deep learning methods and compare their generalization power in the presence of concept drift. Our results show that Convolutional Neural Networks (CNNs) outperform conventional methods, even when compared to an idealized upper bound on their performance created in a piecewise manner by selecting the best method and its best configuration at each point in time, thus mimicking the output of a perfect meta-learning architecture. In the context of sequential data subject to concept drift, our results appear to defy the widely accepted No Free Lunch (NFL) theorem, which stipulates that no method dominates all others in every situation. While this is by no means a rejection of the NFL theorem, which captures a much more complex phenomenon, it is nonetheless a surprising result worth further investigation. In fact, our results show that, when data availability is limited, a meta-learning approach is preferable to CNNs, as it requires less data for training.
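
As an illustration of the "idealized upper bound" mentioned in the abstract, the sketch below (not the paper's actual code) shows how a piecewise oracle can be assembled over consecutive windows of a stream: at every window the best-performing conventional method is selected, mimicking the output of a perfect meta-learner. All method names and accuracy values here are hypothetical placeholders.

```python
# Illustrative sketch (not the paper's code): a piecewise "oracle" upper bound
# over conventional methods on a stream split into consecutive windows.
# At each window the oracle picks the best-performing conventional method,
# mimicking a perfect meta-learner. Scores below are hypothetical placeholders.
import numpy as np

rng = np.random.default_rng(0)
n_windows = 12

# Hypothetical per-window accuracies of conventional learners on a drifting stream.
conventional = {
    "random_forest": rng.uniform(0.60, 0.90, size=n_windows),
    "svm": rng.uniform(0.55, 0.88, size=n_windows),
    "naive_bayes": rng.uniform(0.50, 0.85, size=n_windows),
}
cnn = rng.uniform(0.70, 0.95, size=n_windows)  # hypothetical CNN trajectory

# Piecewise oracle: the best conventional method, chosen independently per window.
oracle = np.max(np.vstack(list(conventional.values())), axis=0)

print("oracle (best conventional per window), mean accuracy:", round(float(oracle.mean()), 3))
print("CNN, mean accuracy:", round(float(cnn.mean()), 3))
```

Comparing the CNN's per-window trajectory against this oracle curve is what the abstract refers to when it states that CNNs outperform the conventional methods even under this idealized per-window selection.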
Publication year: 2019
ISBN: 978-1-7281-4550-1

Use this identifier to cite or link to this document: https://hdl.handle.net/11586/373837

Citations
  • Scopus 8