Predicting Bug-Fixing Time: DistilBERT Versus Google BERT
Pasquale Ardimento
2022-01-01
Abstract
Predicting bug-fixing time can be treated as a supervised text categorization task in Natural Language Processing. In recent years, following the adoption of deep learning in Natural Language Processing, pre-trained contextualized word representations have become widespread. One of the most widely used pre-trained language representation models is Google BERT (hereinafter, for brevity, BERT). BERT uses a self-attention mechanism to learn a bidirectional context representation of a word in a sentence, which constitutes one of its main advantages over previously proposed solutions. However, due to its large size, BERT is difficult to put into production. To address this issue, a smaller, faster, cheaper, and lighter version of BERT, named DistilBERT, was introduced at the end of 2019. This paper compares the efficacy of BERT and DistilBERT, each combined with Logistic Regression, in predicting bug-fixing time from the bug reports of a large-scale open-source software project, LiveCode. In the experiments carried out, DistilBERT retains almost 100% of BERT's language-understanding capability and, in the best case, is 63.28% faster than BERT. Moreover, with an inexpensive tuning of the C parameter of Logistic Regression, DistilBERT achieves even better accuracy than BERT.
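The pipeline the abstract describes, frozen transformer embeddings fed to a Logistic Regression classifier whose C (inverse regularization strength) parameter is tuned by grid search, can be sketched as follows. This is a minimal illustration, not the paper's implementation: random vectors stand in for DistilBERT sentence embeddings of bug reports, the binary labels (e.g. fast vs. slow fix) are synthetic, and the C grid is an assumed choice.

```python
# Hedged sketch: tuning Logistic Regression's C on frozen embeddings.
# Random 768-dim vectors stand in for DistilBERT features of bug reports;
# labels are synthetic stand-ins for fast/slow bug-fixing-time classes.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(0)
n_reports, emb_dim = 400, 768           # DistilBERT hidden size is 768
X = rng.normal(size=(n_reports, emb_dim))
w = rng.normal(size=emb_dim)
y = (X @ w > 0).astype(int)             # synthetic fast(0)/slow(1) labels

# Cross-validated search over C: smaller C = stronger regularization.
grid = GridSearchCV(
    LogisticRegression(max_iter=1000),
    param_grid={"C": [0.01, 0.1, 1.0, 10.0]},  # assumed grid
    cv=3,
)
grid.fit(X, y)
print("best C:", grid.best_params_["C"])
print("cv accuracy:", round(grid.best_score_, 2))
```

Because the embeddings are pre-computed once and only the linear classifier is refit per grid point, this tuning is cheap relative to fine-tuning the transformer itself, which is why the abstract describes it as not time-consuming.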