Deep learning versus conventional learning in data streams with concept drifts
Corizzo R.
2019-01-01
Abstract
In many real-world applications, the characteristics of data collected by activity logs, sensors, and mobile devices change over time. This behavior is known as concept drift. In complex environments that produce high-dimensional data streams, machine learning tasks become cumbersome, as models become outdated very quickly. In our study, we assess hundreds of combinations of data characteristics and methods on network traffic data. Specifically, we focus on seven conventional machine learning and deep learning methods and compare their generalization power in the presence of concept drift. Our results show that Convolutional Neural Networks (CNNs) outperform conventional methods, even when compared to an idealized upper bound on their performance, constructed in a piecewise manner by selecting the best method and its best configuration at each point in time, thus mimicking the output of a perfect meta-learning architecture. In the context of sequential data subject to concept drift, our results appear to defy the widely accepted No Free Lunch (NFL) theorem, which stipulates that no method dominates all others in every situation. While this is by no means a rejection of the NFL theorem, which captures a much more complex phenomenon, it is nonetheless a surprising result worth further investigation. Indeed, our results also show that, when data availability is limited, a meta-learning approach is preferable to CNNs, as it requires less data for training.
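The piecewise upper bound mentioned in the abstract can be sketched as follows. This is a minimal illustration, not the paper's actual evaluation pipeline: the method names and per-window scores are hypothetical, and the "oracle" simply takes the best conventional method at each time window, mimicking a perfect meta-learner.

```python
# Hypothetical per-window scores (e.g., F1) for conventional methods
# evaluated on a drifting data stream. Values are illustrative only.
scores = {
    "svm": [0.71, 0.62, 0.58, 0.65],
    "rf":  [0.69, 0.70, 0.55, 0.61],
    "knn": [0.64, 0.66, 0.60, 0.59],
}
cnn = [0.75, 0.73, 0.68, 0.70]  # hypothetical CNN scores per window

# Piecewise oracle: at each window, select the best-performing
# conventional method, as a perfect meta-learner would.
oracle = [max(s[t] for s in scores.values()) for t in range(len(cnn))]

print(oracle)  # idealized upper bound, one value per window
# The abstract's claim corresponds to the CNN beating this bound everywhere:
print(all(c > o for c, o in zip(cnn, oracle)))
```

Under these toy numbers the CNN exceeds the oracle at every window, which is the pattern the study reports for its real experiments.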