**4. MATHEMATICAL THEORY OF NEURAL LEARNING**

**4.0 Introduction**

This chapter deals with theoretical aspects of learning
in artificial neural networks. It investigates mathematically
the nature and stability of the asymptotic solutions obtained
using the basic supervised, Hebbian, and reinforcement learning
rules introduced in the previous chapter. Formal
analysis is also given for simple competitive learning and self-organizing
feature map learning.

A unifying framework for the characterization of
various learning rules is presented. This framework is based
on the notion that learning in general neural networks can be
viewed as a search, in a multidimensional space, for a solution
that optimizes a prespecified criterion function, with or without
constraints. Under this framework, a continuous-time learning
rule is viewed as a first-order stochastic differential equation
(a stochastic dynamical system), whereby the state of the system
evolves so as to minimize an associated instantaneous criterion
function. Approximation techniques are employed to determine, in
an average sense, the nature of the asymptotic solutions of the
stochastic system. This approximation leads to an "average learning
equation" which, in most cases, can be cast as a globally
asymptotically stable gradient system whose stable equilibria are
minimizers of a well-defined average criterion function. Finally,
subject to certain assumptions, these stable equilibria can be taken
as the possible limits (attractor states) of the stochastic learning
equation.
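The view of a learning rule as a stochastic gradient system can be sketched numerically. The following is a minimal illustration, not taken from the chapter: it discretizes the stochastic flow dw/dt = &minus;&nabla;<sub>w</sub> J(w; x) for the LMS/delta rule, whose instantaneous criterion is J = &frac12;(d &minus; w&middot;x)&sup2;. The teacher weights `w_true`, the step size, and the Gaussian input distribution are all assumptions chosen for the demonstration.

```python
# Sketch (illustrative, not the chapter's derivation): a discrete-time
# stochastic learning rule whose average learning equation is a gradient
# system on the average criterion E[J]. With the LMS instantaneous
# criterion J = 0.5*(d - w.x)^2, the unique minimizer of E[J] is w_true.
import numpy as np

rng = np.random.default_rng(0)
n, steps, mu = 3, 20000, 0.01
w_true = np.array([1.0, -2.0, 0.5])  # hypothetical teacher weights
w = np.zeros(n)                      # state of the learning "dynamical system"

for _ in range(steps):
    x = rng.normal(size=n)           # random training sample
    d = w_true @ x                   # desired output (noise-free teacher)
    w += mu * (d - w @ x) * x        # stochastic step: w <- w - mu * grad_w J

# For small mu, w tracks the stable equilibrium of the average learning
# equation, i.e. the minimizer of the average criterion.
print(np.allclose(w, w_true, atol=1e-2))
```

With a noise-free teacher the gradient noise vanishes at the equilibrium, so the stochastic trajectory settles onto the minimizer itself; with noisy targets it would instead fluctuate in an O(mu) neighborhood of it, which is the sense in which the averaging approximation holds.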

The chapter also treats two important issues associated
with learning in a general feedforward neural network. These
are learning generalization and learning complexity. The section
on generalization presents a theoretical method for calculating
the asymptotic probability of correct generalization of a neural
network as a function of the training set size and the number
of free parameters in the network. Here, generalization in both deterministic
and stochastic nets is investigated. The chapter concludes by
reviewing some significant results on the complexity of learning
in neural networks.
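The dependence of generalization on training set size relative to the number of free parameters can be previewed empirically. The sketch below is a hypothetical Monte Carlo illustration, not the chapter's analytical method: a least-squares surrogate learner is fit to m examples labelled by a random linear "teacher," and its test accuracy is averaged over trials. The learner, the Gaussian data model, and all sizes are assumptions made for this demonstration.

```python
# Illustrative simulation (assumed setup): probability of correct
# generalization of a simple deterministic net grows with the training
# set size m relative to the number of free parameters n.
import numpy as np

rng = np.random.default_rng(1)

def gen_accuracy(m, n, trials=200, test=500):
    """Mean test accuracy of a least-squares fit to m teacher-labelled points."""
    acc = 0.0
    for _ in range(trials):
        teacher = rng.normal(size=n)             # random target hyperplane
        X = rng.normal(size=(m, n))              # m training inputs
        y = np.sign(X @ teacher)                 # teacher labels
        w, *_ = np.linalg.lstsq(X, y, rcond=None)  # surrogate learner
        Xt = rng.normal(size=(test, n))          # fresh test inputs
        acc += np.mean(np.sign(Xt @ w) == np.sign(Xt @ teacher))
    return acc / trials

n = 10
accs = {m: gen_accuracy(m, n) for m in (n // 2, 2 * n, 10 * n)}
print(accs)  # accuracy rises as m grows relative to n
```

This mirrors, in simulation only, the kind of statement the generalization section makes analytically: correct generalization becomes increasingly probable as the training set size grows against the number of free parameters.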
