Feedforward Neural Network Methodology
Terrence L. Fine
Springer
Preface
The decade prior to publication has seen an explosive growth in computational
speed and memory and a rapid enrichment in our understand-
ing of artificial neural networks. These two factors have cooperated to at
last provide systems engineers and statisticians with a working, practi-
cal, and successful ability to routinely make complex, nonlinear
models of such ill-understood phenomena as physical, economic, social,
and information-based time series and signals and of the patterns hid-
den in high-dimensional data. The models are based closely on the data
itself and require only little prior understanding of the stochastic mecha-
nisms underlying these phenomena. Among these models, the feedforward
neural networks, also called multilayer perceptrons, have lent themselves
to the design of the widest range of successful forecasters, pattern classi-
fiers, controllers, and sensors. In a number of problems in optical character
recognition and medical diagnostics, such systems provide state-of-the-art
performance and such performance is also expected in speech recognition
applications. The successful application of feedforward neural networks to
time series forecasting has been demonstrated many times, and quite visibly so
in the formation of market funds in which investment decisions are based
largely on neural network–based forecasts of performance.
The purpose of this monograph, accomplished by exposing the method-
ology driving these developments, is to enable you to engage in these ap-
plications and, by being brought to several research frontiers, to advance
the methodology itself. The focus on feedforward neural networks was also
chosen to enable a coherent, thorough presentation of much of what is cur-
rently known—the rapid state of advancement of this subject precludes a
comprehensive and final disposition.
Chapter 1 provides some of the historical background, a rapid survey of
successful applications, a transition from neurobiolog