Recurrent neural networks are attracting considerable interest, especially because of their potential in problems such as pattern completion and temporal sequence processing (Almeida, 1987; Hertz et al., 1991). As with feed-forward networks, in virtually all problems of interest the proper number of hidden units is not known in advance, and choosing it involves a trade-off between generalization and learning ability (Hertz et al., 1991). One popular way of addressing this problem is to train an over-dimensioned network and then prune the excessive units (Sietsma and Dow, 1988). In this paper we propose a method for pruning a recurrent neural network, which generalizes an algorithm previously developed for feed-forward architectures (Pelillo and Fanelli, 1993; Castellano et al., 1993). The method is based on the idea of removing hidden units and adjusting the remaining weights so that the network's overall input-output behavior is kept approximately unchanged over the entire training set. Experiments demonstrate the effectiveness of our approach.
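The abstract does not spell out the weight-adjustment step, but the cited feed-forward predecessor (Castellano et al., 1993) adjusts the remaining weights by least squares so that they absorb the removed unit's contribution. As a rough sketch of that idea applied to a recurrent layer, assuming a recurrent weight matrix W and hidden activations H recorded over the training sequence (the function name and the least-squares formulation are our illustration, not the paper's exact procedure), one could write in Python/NumPy:

    import numpy as np

    def prune_hidden_unit(W, H, h):
        """Remove hidden unit h and adjust the remaining recurrent weights
        by least squares so that the net input to every surviving unit is
        approximately preserved over the training set.

        W : (n, n) recurrent weights, W[i, j] = weight from unit j to unit i
        H : (T, n) hidden activations recorded over the training sequence
        h : index of the unit to remove

        Note: this is a sketch inferred from the abstract, not the authors'
        published algorithm.
        """
        n = W.shape[0]
        keep = [j for j in range(n) if j != h]

        # Net-input contribution that unit h made to each surviving unit i
        # at every time step t:  y_h(t) * w_{ih}
        target = np.outer(H[:, h], W[keep, h])            # shape (T, n-1)

        # Solve  H[:, keep] @ dW.T  ~  target  for the weight corrections,
        # i.e. redistribute h's contribution over the remaining units.
        dW, *_ = np.linalg.lstsq(H[:, keep], target, rcond=None)

        return W[np.ix_(keep, keep)] + dW.T               # shape (n-1, n-1)

    # Example: prune unit 2 of a 5-unit recurrent layer
    rng = np.random.default_rng(0)
    W = rng.standard_normal((5, 5))
    H = np.tanh(rng.standard_normal((100, 5)))  # stand-in for recorded activations
    W_pruned = prune_hidden_unit(W, H, h=2)     # shape (4, 4)

In practice one would presumably apply such a removal step repeatedly and stop when the error over the training set begins to degrade, which matches the abstract's goal of keeping the input-output behavior approximately unchanged.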
Title: Pruning in recurrent neural networks
Authors:
Publication date: 1994
Handle: http://hdl.handle.net/11586/114512
Type: 4.1 Contribution in conference proceedings