ISBN-13: 9783330344518 / English / Paperback / 2017 / 136 pp.
The concept of ensemble learning has become exceptionally popular over the last couple of decades because a group of base classifiers trained for the same problem often achieves higher accuracy than a single model. The key to building an ensemble that outperforms a single model is to combine a set of diverse classifiers. This work concentrates on neural networks as base classifiers and explores which network parameters, when randomised, lead to diverse ensembles with better generalisation ability than a single model. To stimulate disagreement among the members of an ensemble of neural networks, we apply a sampling strategy similar to the one used by Random Forests, together with random variation of the network parameters. Experimental results demonstrate that randomly varying different network parameters can induce diversity in an ensemble of neural networks, but that this does not necessarily improve accuracy. This work will be useful for readers interested in ensemble methods and in Artificial Neural Networks as base classifiers.
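The following is a minimal sketch of the general approach the abstract describes, not the book's actual experimental code: it trains several multilayer perceptrons on bootstrap samples (the bagging step Random Forests use) while randomising a few network parameters per member, then combines predictions by majority vote. The choice of scikit-learn's MLPClassifier, the parameter ranges, the ensemble size, and the dataset are all illustrative assumptions.

```python
# Illustrative sketch, assuming scikit-learn's MLPClassifier as the base
# network; parameter ranges, ensemble size, and dataset are assumptions,
# not the book's actual setup.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

ensemble = []
for i in range(10):
    # Bootstrap sample of the training set, as in Random Forests' bagging step.
    idx = rng.integers(0, len(X_train), size=len(X_train))
    # Randomly vary network parameters to stimulate disagreement among members.
    net = MLPClassifier(
        hidden_layer_sizes=(int(rng.integers(5, 50)),),
        learning_rate_init=float(rng.uniform(1e-4, 1e-2)),
        max_iter=500,
        random_state=i,  # also randomises the weight initialisation
    )
    net.fit(X_train[idx], y_train[idx])
    ensemble.append(net)

# Combine members by majority vote (labels are 0/1 here).
votes = np.stack([net.predict(X_test) for net in ensemble])
y_pred = (votes.mean(axis=0) >= 0.5).astype(int)
print("ensemble accuracy:", (y_pred == y_test).mean())
```

Diversity comes from two sources in this sketch: the bootstrap resampling and the per-member parameter randomisation. As the abstract notes, inducing measurable disagreement between members does not by itself guarantee higher accuracy than a single well-tuned network.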