IEEE TRANSACTIONS ON NEURAL NETWORKS, VOL. 6, NO. 3, MAY 1995 669
Dynamic Learning Rate Optimization
of the Backpropagation Algorithm
Xiao-Hu Yu, Member, IEEE, Guo-An Chen, Student Member, IEEE, and Shi-Xin Cheng, Member, IEEE
Abstract-It has been observed by many authors that backpropagation (BP) error surfaces usually consist of a large number of flat regions as well as extremely steep regions. As such, the BP algorithm with a fixed learning rate will be inefficient. This paper considers dynamic learning rate optimization of the BP algorithm using derivative information. An efficient method of deriving the first and second derivatives of the objective function with respect to the learning rate is explored, which does not involve explicit calculation of second-order derivatives in weight space, but rather uses the information gathered from the forward and backward propagation. Several learning rate

... of the network are computed forward from the input layer to the output layer. In the second phase, the descent gradient is calculated in a BP fashion, which makes it possible to adjust the weights in a descent direction. This procedure is repeatedly performed for each training pattern until all error signals between the desired and actual outputs are sufficiently small. For initialization of the learning, all the weights are set to small random values.
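The two-phase procedure described above (a forward pass from the input layer to the output layer, followed by backward propagation of the error gradient and a weight update in a descent direction, repeated per pattern from small random initial weights) can be sketched as follows. This is a minimal pure-Python illustration, not the authors' implementation: the 2-2-1 topology, sigmoid activations, OR training set, and the fixed learning rate `eta` are assumptions of the sketch; the paper's contribution is precisely to replace such a fixed `eta` with a dynamically optimized one.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Initialization: all weights set to small random values (2-2-1 network).
# Each hidden/output unit carries 2 input weights plus a bias term.
w1 = [[random.uniform(-0.5, 0.5) for _ in range(3)] for _ in range(2)]
w2 = [random.uniform(-0.5, 0.5) for _ in range(3)]

def forward(x):
    """Phase 1: propagate the inputs forward to the output layer."""
    h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w1]
    y = sigmoid(w2[0] * h[0] + w2[1] * h[1] + w2[2])
    return h, y

def train_pattern(x, target, eta):
    """Phase 2: backpropagate the error and adjust weights in a descent direction."""
    h, y = forward(x)
    # Output-layer delta for squared error; y*(1-y) is the sigmoid derivative.
    delta_o = (y - target) * y * (1.0 - y)
    # Hidden-layer deltas, obtained by propagating delta_o backward.
    delta_h = [delta_o * w2[j] * h[j] * (1.0 - h[j]) for j in range(2)]
    # Descent update of the output weights (index 2 is the bias).
    for j in range(2):
        w2[j] -= eta * delta_o * h[j]
    w2[2] -= eta * delta_o
    # Descent update of the hidden weights.
    for j in range(2):
        w1[j][0] -= eta * delta_h[j] * x[0]
        w1[j][1] -= eta * delta_h[j] * x[1]
        w1[j][2] -= eta * delta_h[j]

# Illustrative training set: the logical OR function.
patterns = [([0.0, 0.0], 0.0), ([0.0, 1.0], 1.0),
            ([1.0, 0.0], 1.0), ([1.0, 1.0], 1.0)]

def total_error(pats):
    return sum((forward(x)[1] - t) ** 2 for x, t in pats)

before = total_error(patterns)
# Repeat the two-phase procedure for each training pattern until the
# error signals are sufficiently small (here: a fixed number of sweeps).
for _ in range(2000):
    for x, t in patterns:
        train_pattern(x, t, eta=0.5)
after = total_error(patterns)
print(before, after)
```

Since `eta` is fixed here, the sketch exhibits exactly the behavior the abstract criticizes: one constant step size must serve both the flat and the steep regions of the error surface.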