Learning Representations by Back-Propagating Errors (1986)
AI Paper Podcasts

Published on Oct 6, 2024

Title: Learning representations by back-propagating errors
Link: nature.com/articles/323533a0
Authors: David E. Rumelhart, Geoffrey E. Hinton & Ronald J. Williams
Date: 9 October 1986

Summary

This is a classic paper on back-propagation, a technique for training artificial neural networks. The authors introduce the back-propagation algorithm, which repeatedly adjusts the weights of the connections in a network so as to minimize the difference between the network's actual output and the desired output. The paper highlights how back-propagation enables the network to construct internal "hidden" units that come to represent important features of the task domain, giving it representational power beyond that of earlier, simpler methods such as the perceptron-convergence procedure. The authors discuss the algorithm's implementation and its limitations, including the possibility that gradient descent becomes stuck in a local minimum of the error surface, and demonstrate its application to tasks such as symmetry detection and information storage. The paper concludes that back-propagation, although not a plausible model of learning in brains, provides a valuable framework for understanding how internal representations can be learned in neural networks.
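To make the procedure concrete, here is a minimal NumPy sketch of back-propagation on the paper's symmetry-detection task: decide whether a six-bit input vector is symmetric about its centre. The two-hidden-unit architecture and the uniform weight initialization in (-0.3, 0.3) follow the paper's example; the learning rate and epoch count are illustrative assumptions, and, as the paper notes, plain gradient descent (delta-w proportional to -dE/dw) on this task can settle into a local minimum rather than the global one.

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# All 64 six-bit inputs; target is 1 when a vector reads the same backwards.
X = np.array([[(i >> b) & 1 for b in range(6)] for i in range(64)], dtype=float)
y = np.array([[float(all(row == row[::-1]))] for row in X])

# One hidden layer of two units, matching the paper's symmetry network;
# weights start uniform in (-0.3, 0.3) as in the paper.
W1 = rng.uniform(-0.3, 0.3, (6, 2)); b1 = np.zeros(2)
W2 = rng.uniform(-0.3, 0.3, (2, 1)); b2 = np.zeros(1)

lr = 2.0  # illustrative learning rate (the paper's epsilon)
for _ in range(30000):
    # Forward pass through the two sigmoid layers.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: the error signal times the sigmoid derivative y(1 - y),
    # propagated from the output layer back to the hidden layer.
    d_out = (out - y) * out * (1.0 - out)
    d_h = (d_out @ W2.T) * h * (1.0 - h)

    # Gradient-descent updates: each weight moves opposite its error gradient.
    W2 -= lr * (h.T @ d_out) / len(X); b2 -= lr * d_out.mean(axis=0)
    W1 -= lr * (X.T @ d_h) / len(X);   b1 -= lr * d_h.mean(axis=0)

print("mean squared error:", float(((out - y) ** 2).mean()))

When training succeeds, the two hidden units discover the solution described in the paper: each assigns mirror-image input pairs weights of equal magnitude and opposite sign, so both hidden units fall silent exactly when the input is symmetric.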

Key Topics

Backpropagation Algorithm, Multilayer Neural Networks, Internal Representations, Gradient Descent
