**2. COMPUTATIONAL CAPABILITIES OF ARTIFICIAL NEURAL NETWORKS**

**2.0 Introduction**

In the previous chapter, the computational capabilities
of single LTG's and PTG's were investigated. In this chapter, networks
of LTG's are considered and their mapping capabilities are examined.
The function approximation capabilities of networks of units (artificial
neurons) with continuous nonlinear activation functions are also studied.
In particular, some important theoretical results on the approximation
of arbitrary multivariate continuous functions by feedforward multilayer
neural networks are presented. This chapter concludes with a brief section
on neural network computational complexity and the efficiency of neural
network hardware implementation. In the remainder of this book, the terms
artificial neural network, neural network, network, and net will be used
interchangeably, unless noted otherwise.

Before proceeding any further, note that the *n*-input
PTG(*r*) of Chapter One can be considered as a form of a neural network
with a "fixed" preprocessing (hidden) layer feeding into a single
LTG in its output layer, as was shown in Figure 1.3.4. Furthermore, Theorems
1.3.1 (extended to multivariate functions) and 1.2.1 establish the "universal"
realization capability of this architecture for continuous functions of
the form *f*: ℝⁿ → ℝ (assuming that the
output unit has a linear activation function) and for Boolean functions
of the form *f*: {0, 1}ⁿ → {0, 1}, respectively.
Here, universality means that an arbitrary continuous
function can be approximated to any desired degree of accuracy. Note that for continuous
functions, the order *r* of the PTG may become very large. On the
other hand, for Boolean functions, universality means that the realization
is exact. Here, *r* = *n* is sufficient. The following
sections consider other more interesting neural net architectures and present
important results on their computational capabilities.
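As an illustration of the Boolean case, a second-order PTG can realize XOR exactly, a function no single LTG can compute. The sketch below is only illustrative (the particular weights and threshold are one valid choice, not taken from the text): a fixed polynomial preprocessing stage produces the terms *x*₁, *x*₂, and *x*₁*x*₂, which feed a single threshold unit, mirroring the architecture of Figure 1.3.4.

```python
# Illustrative second-order PTG realizing XOR exactly (n = 2, r = 2).
# The weights (1, 1, -2) and threshold 0.5 are one valid choice among many.

def ptg_xor(x1, x2):
    # Polynomial expansion terms up to order r = 2: x1, x2, x1*x2,
    # combined by a weighted sum (the "fixed preprocessing layer").
    s = 1.0 * x1 + 1.0 * x2 - 2.0 * (x1 * x2)
    # Single LTG in the output layer: threshold the polynomial sum.
    return 1 if s >= 0.5 else 0

# Exhaustive check over all Boolean inputs confirms the realization is exact.
for x1 in (0, 1):
    for x2 in (0, 1):
        assert ptg_xor(x1, x2) == (x1 ^ x2)
```

Since *n* = 2 here, the order *r* = 2 expansion already contains every product of distinct inputs, which is why the realization can be made exact rather than approximate.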