LYAPUNOV THEORY BASED ADAPTIVE LEARNING ALGORITHM FOR MULTILAYER NEURAL NETWORKS
Date
2014
Publisher
ACAD SCIENCES CZECH REPUBLIC, INST COMPUTER SCIENCE
Access Rights
info:eu-repo/semantics/openAccess
Abstract
This paper presents a novel weight-updating algorithm for training multilayer neural networks (MLNNs). The MLNN system is first linearized, and the design procedure is then posed as an inequality-constrained optimization problem. A well-chosen Lyapunov function is integrated into the constraint function to guarantee asymptotic stability in the sense of Lyapunov. The convergence of the training algorithm is thereby improved through a new analytical adaptation gain rate that adjusts itself adaptively according to a sequential squared-error rate. The proposed algorithm is compared with two types of backpropagation algorithms and a Lyapunov-theory-based MLNN algorithm on three benchmark problems: XOR, 3-bit parity, and the 8-3 encoder. The results are compared in terms of the number of learning iterations and the computational time required for a specified convergence rate, and they clearly indicate that the proposed algorithm converges much faster than the other three. The proposed algorithm is also tested comparatively on a real iris-image database for a multiple-input, multiple-output classification problem, and the effect of the adaptation gain rate on faster convergence and higher performance is verified.
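The abstract describes a learning gain that adapts according to a sequential squared-error rate. The paper's analytical, Lyapunov-derived gain formula is not reproduced in this record, so the sketch below uses a hypothetical error-ratio rule (the ratio update and its clipping bounds are assumptions, not the authors' method) purely to illustrate error-driven gain adaptation in a small MLNN trained on the XOR benchmark mentioned above:

```python
import numpy as np

# XOR benchmark: 2 inputs, 4 hidden sigmoid units, 1 output.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0.0, 1.0, (2, 4)); b1 = np.zeros(4)
W2 = rng.normal(0.0, 1.0, (4, 1)); b2 = np.zeros(1)

def sig(z):
    return 1.0 / (1.0 + np.exp(-z))

eta = 0.5        # learning gain (adapted below)
prev_err = None
for epoch in range(5000):
    # Forward pass.
    h = sig(X @ W1 + b1)
    out = sig(h @ W2 + b2)
    err = float(np.mean((y - out) ** 2))

    # Hypothetical gain adaptation: scale the gain by the ratio of
    # successive squared errors, clipped to keep the update stable.
    # This stands in for the paper's analytical adaptation gain rate.
    if prev_err is not None and err > 0.0:
        eta = float(np.clip(eta * prev_err / err, 0.1, 1.0))
    prev_err = err

    # Standard backpropagation with the current gain.
    d_out = (out - y) * out * (1.0 - out)
    d_h = (d_out @ W2.T) * h * (1.0 - h)
    W2 -= eta * (h.T @ d_out); b2 -= eta * d_out.sum(axis=0)
    W1 -= eta * (X.T @ d_h);   b1 -= eta * d_h.sum(axis=0)

final_err = err
```

The error-ratio rule grows the gain while the error keeps shrinking and cuts it back after an error increase; the paper derives its gain analytically from the Lyapunov stability constraint instead.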
Keywords
Lyapunov stability theory, multilayer neural network, Lagrange multiplier theory, adaptive learning
Source
NEURAL NETWORK WORLD
WoS Quartile
Q4
Scopus Quartile
Q4
Volume
24
Issue
6