COGNITIVE SCIENCE 9, 147-169 (1985)
A Learning Algorithm for
Boltzmann Machines*
DAVID H. ACKLEY
GEOFFREY E. HINTON
Computer Science Department
Carnegie-Mellon University
TERRENCE J. SEJNOWSKI
Biophysics Department
The Johns Hopkins University
The computational power of massively parallel networks of simple processing
elements resides in the communication bandwidth provided by the hardware
connections between elements. These connections can allow a significant
fraction of the knowledge of the system to be applied to an instance of a prob-
lem in a very short time. One kind of computation for which massively parallel
networks appear to be well suited is large constraint satisfaction searches,
but to use the connections efficiently two conditions must be met: First, a
search technique that is suitable for parallel networks must be found. Second,
there must be some way of choosing internal representations which allow the
preexisting hardware connections to be used efficiently for encoding the con-
straints in the domain being searched. We describe a general parallel search
method, based on statistical mechanics, and we show how it leads to a gen-
eral learning rule for modifying the connection strengths so as to incorporate
knowledge about a task domain in an efficient way. We describe some simple
examples in which the learning algorithm creates internal representations
that are demonstrably the most efficient way of using the preexisting connec-
tivity structure.
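The parallel search method sketched in this abstract is the stochastic update rule of the Boltzmann machine: each binary unit adopts the "on" state with probability 1/(1 + e^(-ΔE/T)), where ΔE is the unit's energy gap and T a temperature parameter. As an illustrative sketch only (the function name and data layout are my own, not the paper's), one asynchronous update sweep might look like:

```python
import math
import random

def boltzmann_sweep(states, weights, biases, T):
    """One asynchronous update sweep of a Boltzmann machine at temperature T.

    states  -- list of 0/1 unit states
    weights -- symmetric matrix of connection strengths (weights[k][j] == weights[j][k])
    biases  -- per-unit bias terms
    Each unit k turns on with probability 1 / (1 + exp(-gap_k / T)),
    where gap_k is its energy gap given the current states of the other units.
    """
    n = len(states)
    for k in random.sample(range(n), n):  # visit units in random order
        gap = biases[k] + sum(weights[k][j] * states[j]
                              for j in range(n) if j != k)
        p_on = 1.0 / (1.0 + math.exp(-gap / T))
        states[k] = 1 if random.random() < p_on else 0
    return states
```

At high T the updates are nearly random, which lets the search escape poor local minima; lowering T gradually (simulated annealing) concentrates the network in low-energy states that satisfy the encoded constraints.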
1. INTRODUCTION
Evidence about the architecture of the brain and the potential of the new
VLSI technology have led to a resurgence of interest in “connectionist” sys-
* The research reported here was supported by grants from the System Development
Foundation. We thank Peter Brown, Francis Crick, Mark Derthick, Scott Fahlman, Jerry
Feldman, Stuart Geman, Gail Gong, John Hopfield, Jay McClelland, Barak Pearlmutter,
Harry Printz, Dave Rumelhart, Tim Shallice