Improving ANN Generalization via Self-Organized Flocking in conjunction with Multitasked Backpropagation

Date

2003-04-14

Abstract

The purpose of this research has been to develop methods of improving the generalization capabilities of artificial neural networks. Tools for examining the influence of individual training-set patterns on the learning of individual neurons are put forth and used to implement new network learning algorithms. The new algorithms are based largely on the supervised backpropagation algorithm, and all experiments use standard backpropagation as the baseline for comparison. The new learning algorithms center on two main additions. The first is an unsupervised learning algorithm called flocking, which attempts to produce network hyperplane divisions that are evenly influenced by the examples on either side of each hyperplane. The second is a multi-tasking approach called convergence training, which uses the output of a clustering algorithm to create subtasks representing the divisions between clusters; these subtasks are then trained in unison to promote hyperplane sharing within the problem space. Generalization improved in most cases, and the solutions produced by the new learning algorithms are shown to be robust against different random weight initializations. This research is not only a search for better-generalizing ANN learning algorithms, but also an effort to better understand the complexities involved in ANN generalization.
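The abstract gives no pseudocode, so the sketch below is only an illustrative guess at the flocking idea for a single hyperplane: nudging the plane until the examples on either side exert balanced influence on it. The function names, the one-dimensional data, and the bias-balancing update rule are all assumptions for illustration, not the thesis's actual algorithm.

```python
# Illustrative sketch (assumed, not the thesis's method): balance a single
# hyperplane w.x + b = 0 so that the mean signed distances of the examples
# on its two sides cancel out -- one reading of "evenly influenced by
# examples on either side of the hyperplane".
import random

def signed_distance(w, b, x):
    # Signed distance of point x from the hyperplane w.x + b = 0.
    norm = sum(wi * wi for wi in w) ** 0.5
    return (sum(wi * xi for wi, xi in zip(w, x)) + b) / norm

def flocking_step(w, b, examples, lr=0.1):
    # Partition examples by hyperplane side, then shift the bias so the
    # mean distances of the two sides move toward equal magnitude.
    ds = [signed_distance(w, b, x) for x in examples]
    pos = [d for d in ds if d >= 0]
    neg = [d for d in ds if d < 0]
    if not pos or not neg:
        return b  # nothing to balance: all examples on one side
    imbalance = sum(pos) / len(pos) + sum(neg) / len(neg)
    return b - lr * imbalance  # push the plane toward the "heavier" side

# Usage: two 1-D clusters around -2 and +3; repeated steps move the bias
# until both clusters pull on the plane equally (b settles near -0.5,
# i.e. the plane sits roughly midway between the cluster centers).
random.seed(0)
data = [[-2 + random.gauss(0, 0.3)] for _ in range(20)] + \
       [[3 + random.gauss(0, 0.3)] for _ in range(20)]
w, b = [1.0], 0.0
for _ in range(200):
    b = flocking_step(w, b, data)
```

In the thesis this balancing would act alongside backpropagation on every hidden-unit hyperplane rather than on a lone unit, but the fixed point of the toy update (the plane midway between the two groups) conveys the intended effect.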

Keywords

generalization, learning algorithms, flocking, self-organizing, multi-task training, artificial neural networks

Degree

MS

Discipline

Computer Engineering