3.0 Introduction

One of the most significant attributes of a neural network is its ability to learn by interacting with its environment or with an information source. Learning in a neural network is normally accomplished through an adaptive procedure, known as a learning rule or learning algorithm, whereby the weights of the network are incrementally adjusted so as to improve a predefined performance measure over time.
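
One common generic way to write such an adaptive procedure (the notation below is illustrative and is not tied to any particular rule in this chapter) is as an incremental update of the weight vector,

$$\mathbf{w}(k+1) = \mathbf{w}(k) + \Delta\mathbf{w}(k), \qquad \Delta\mathbf{w}(k) = \eta\, r\bigl(\mathbf{w}(k), \mathbf{x}(k), d(k)\bigr)\,\mathbf{x}(k),$$

where $\mathbf{x}(k)$ is the input pattern presented at step $k$, $d(k)$ is an optional desired target, $\eta$ is a (typically small) positive learning rate, and $r$ is a rule-dependent learning signal. Many classical learning rules can be obtained as particular choices of $r$.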

In the context of artificial neural networks, the process of learning is best viewed as an optimization process. More precisely, the learning process can be viewed as a "search" in a multi-dimensional parameter (weight) space for a solution that gradually optimizes a prespecified objective (criterion) function. This view is adopted in this chapter because it allows us to unify a wide range of existing learning rules that would otherwise appear to be a diverse collection of unrelated procedures.
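
As a concrete, if simplified, illustration of this optimization view, the following sketch performs a gradient-descent "search" in the weight space of a single linear unit. The sum-of-squared-errors criterion, the synthetic data, and the learning rate `eta` are assumptions made for this example and are not taken from the text.

```python
import numpy as np

# Illustrative sketch only: learning viewed as search in weight space.
# A single linear unit y = w . x is adjusted by gradient descent on a
# mean-squared-error criterion J(w); data and learning rate are assumed.

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))        # 100 training patterns, 3 inputs each
w_target = np.array([1.5, -2.0, 0.5])
d = X @ w_target                     # desired target for each pattern

def criterion(w):
    """J(w) = (1/2N) * sum_k (d_k - w.x_k)^2, the function being minimized."""
    e = d - X @ w
    return 0.5 * np.mean(e ** 2)

def gradient(w):
    """Gradient of J(w) with respect to the weight vector w."""
    e = d - X @ w
    return -(X.T @ e) / len(d)

w = np.zeros(3)                      # starting point of the "search"
eta = 0.1                            # learning rate (step size)
for _ in range(500):
    w = w - eta * gradient(w)        # incremental weight adjustment

print("final weights:", w)           # approaches w_target
print("final criterion value:", criterion(w))
```

Each pass through the loop is one incremental weight adjustment of the kind described above; the sequence of weight vectors traces a path through weight space along which the criterion function decreases.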

This chapter presents a number of basic learning rules for supervised, reinforcement, and unsupervised learning tasks. In supervised learning (also known as learning with a teacher or associative learning), each input pattern/signal received from the environment is associated with a specific desired target pattern. Usually, the weights are synthesized gradually, and at each step of the learning process they are updated so that the error between the network's output and the corresponding desired target is reduced. Unsupervised learning, on the other hand, involves the clustering of (or the detection of similarities among) unlabeled patterns of a given training set. The idea here is to optimize (maximize or minimize) some criterion or performance function defined in terms of the output activity of the units in the network. Here, the weights and the outputs of the network are usually expected to converge to representations that capture the statistical regularities of the input data. Reinforcement learning involves updating the network's weights in response to an "evaluative" teacher signal; this differs from supervised learning, where the teacher signal is the "correct answer". Reinforcement learning rules may be viewed as stochastic search mechanisms that attempt to maximize the probability of positive external reinforcement for a given training set.
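
To make these distinctions concrete, the sketch below applies a single update step of three representative single-unit rules: an error-correction (delta-type) rule for supervised learning, a Hebbian-type rule for unsupervised learning, and a perturbation-based rule driven by a scalar evaluative signal for reinforcement learning. The specific update forms, the learning rate `eta`, and the toy critic are illustrative assumptions, not the particular rules developed later in this chapter.

```python
import numpy as np

eta = 0.1                                  # learning rate (assumed)
rng = np.random.default_rng(1)

def supervised_step(w, x, d):
    """Error-correction (delta-type) step: the teacher supplies the correct
    answer d, and the weights move so as to reduce the output error."""
    y = np.dot(w, x)
    return w + eta * (d - y) * x

def unsupervised_step(w, x):
    """Hebbian-type step: no teacher; the update depends only on the input
    and the unit's own activity, so the weights drift toward directions of
    high correlation in the input data."""
    y = np.dot(w, x)
    return w + eta * y * x

def reinforcement_step(w, x, critic):
    """Reinforcement-type step (stochastic search): the unit's response is
    perturbed by noise, the environment returns only an evaluative scalar
    r for that perturbed response, and the weights move along the
    perturbation in proportion to r."""
    noise = rng.normal(scale=0.1)
    y = np.dot(w, x) + noise               # explored (perturbed) response
    r = critic(y)                          # evaluative signal, not the answer
    return w + eta * r * noise * x

w = np.array([0.2, -0.1])
x = np.array([1.0, -0.5])
critic = lambda y: 1.0 if y > 0 else -1.0  # toy critic: rewards positive output

print("supervised:    ", supervised_step(w, x, d=1.0))
print("unsupervised:  ", unsupervised_step(w, x))
print("reinforcement: ", reinforcement_step(w, x, critic))
```

Note how only the supervised step sees the correct answer `d`; the unsupervised step uses the unit's own activity, and the reinforcement step receives nothing but a scalar evaluation of a stochastically explored response.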

In most cases, these learning rules are presented in the basic form appropriate for single-unit training. Exceptions are the unsupervised (competitive or feature-mapping) learning schemes, where an essential competition mechanism necessitates the use of multiple units; for such cases, simple single-layer architectures are assumed. Later chapters of this book (Chapters 5, 6, and 7) extend some of the learning rules discussed here to networks with multiple units and multiple layers.
