1st Edition

Backpropagation: Theory, Architectures, and Applications

Edited by Yves Chauvin and David E. Rumelhart
Copyright 1995
    576 Pages
    by Psychology Press

    Composed of three sections, this book presents the most popular training algorithm for neural networks: backpropagation. The first section presents the theory and principles behind backpropagation as seen from different perspectives such as statistics, machine learning, and dynamical systems. The second presents a number of network architectures designed to match the general concepts of Parallel Distributed Processing with backpropagation learning. Finally, the third section shows how these principles can be applied to a number of fields related to the cognitive sciences, including control, speech recognition, robotics, image processing, and cognitive psychology. The volume is designed to provide both a solid theoretical foundation and a set of examples that show the versatility of the concepts. Useful to experts in the field, it should also be helpful to students seeking to understand the basic principles of connectionist learning and to engineers wanting to add neural networks, and backpropagation in particular, to their set of problem-solving methods.
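    The book's subject can be made concrete with a small example. Below is a minimal sketch of backpropagation for a one-hidden-layer sigmoid network trained on XOR with plain gradient descent. It is illustrative Python, not code from the book; the layer sizes, learning rate, step count, and XOR task are all arbitrary assumptions made for the sketch.

        import numpy as np

        rng = np.random.default_rng(0)
        X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
        y = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets

        W1 = rng.normal(scale=0.5, size=(2, 4))  # input -> hidden weights (sizes arbitrary)
        b1 = np.zeros(4)
        W2 = rng.normal(scale=0.5, size=(4, 1))  # hidden -> output weights
        b2 = np.zeros(1)
        lr = 0.5                                 # learning rate (assumed, not from the book)

        def sigmoid(z):
            return 1.0 / (1.0 + np.exp(-z))

        for step in range(5000):
            # Forward pass.
            h = sigmoid(X @ W1 + b1)    # hidden activations
            out = sigmoid(h @ W2 + b2)  # network output

            # Backward pass: propagate the squared-error derivative layer by
            # layer, using sigmoid'(z) = s * (1 - s).
            d_out = (out - y) * out * (1.0 - out)
            d_h = (d_out @ W2.T) * h * (1.0 - h)

            # Gradient-descent weight updates.
            W2 -= lr * h.T @ d_out
            b2 -= lr * d_out.sum(axis=0)
            W1 -= lr * X.T @ d_h
            b1 -= lr * d_h.sum(axis=0)

        print(out.round(2))  # should approach [[0], [1], [1], [0]]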

    Contents:
    D.E. Rumelhart, R. Durbin, R. Golden, Y. Chauvin, Backpropagation: The Basic Theory.
    A. Waibel, T. Hanazawa, G. Hinton, K. Shikano, K.J. Lang, Phoneme Recognition Using Time-Delay Neural Networks.
    C. Schley, Y. Chauvin, V. Henkle, Automated Aircraft Flare and Touchdown Control Using Neural Networks.
    F.J. Pineda, Recurrent Backpropagation Networks.
    M.C. Mozer, A Focused Backpropagation Algorithm for Temporal Pattern Recognition.
    D.H. Nguyen, B. Widrow, Nonlinear Control with Neural Networks.
    M.I. Jordan, D.E. Rumelhart, Forward Models: Supervised Learning with a Distal Teacher.
    S.J. Hanson, Backpropagation: Some Comments and Variations.
    A. Cleeremans, D. Servan-Schreiber, J.L. McClelland, Graded State Machines: The Representation of Temporal Contingencies in Feedback Networks.
    S. Becker, G.E. Hinton, Spatial Coherence as an Internal Teacher for a Neural Network.
    J.R. Bachrach, M.C. Mozer, Connectionist Modeling and Control of Finite State Systems Given Partial State Information.
    P. Baldi, Y. Chauvin, K. Hornik, Backpropagation and Unsupervised Learning in Linear Networks.
    R.J. Williams, D. Zipser, Gradient-Based Learning Algorithms for Recurrent Networks and Their Computational Complexity.
    P. Baldi, Y. Chauvin, When Neural Networks Play Sherlock Holmes.
    P. Baldi, Gradient Descent Learning Algorithms: A Unified Perspective.
