Approximation Capabilities of Neural Networks

Abstract: Algorithm-based computers are programmed, i.e., there must be a set of rules that characterizes, a priori, the calculation carried out by the computer. Neural computation, based on neural networks, solves problems by learning: the network absorbs experience and modifies its internal structure in order to accomplish a given task. In the learning process, the available information is usually divided into two categories: examples of function values (training data) and prior information, e.g. a smoothness constraint or other particular properties [3]. From the learning point of view, the approximation of a function is equivalent to the learning problem of a neural network. In this paper we show the capability of a neural network to approximate arbitrary continuous functions, and we build a practical neural network to approximate a given continuous function. We have carried out experiments in order to confirm the theoretical results.
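The learning-as-approximation view described above can be sketched in code. The following is a minimal illustration (not the paper's actual experimental setup, whose architecture and target function are not specified here): a single-hidden-layer network with tanh activations is fitted by full-batch gradient descent to sampled values of an assumed example target, sin(x), on [-π, π].

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative continuous target function (an assumption, not the paper's choice)
def f(x):
    return np.sin(x)

# "Examples of function values" -- the training data
X = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
Y = f(X)

# One hidden layer of tanh units: the classical universal-approximator form
H = 20
W1 = rng.normal(0.0, 1.0, (1, H)); b1 = np.zeros(H)
W2 = rng.normal(0.0, 1.0, (H, 1)); b2 = np.zeros(1)

lr = 0.05
for epoch in range(10000):
    # Forward pass
    A = np.tanh(X @ W1 + b1)        # hidden activations
    P = A @ W2 + b2                 # network output
    E = P - Y                       # residual

    # Backward pass for mean-squared error
    gW2 = A.T @ E / len(X); gb2 = E.mean(axis=0)
    dA = (E @ W2.T) * (1.0 - A**2)  # tanh derivative is 1 - tanh^2
    gW1 = X.T @ dA / len(X); gb1 = dA.mean(axis=0)

    # Gradient-descent step: the network "modifies its internal structure"
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

mse = float(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - Y) ** 2))
print(f"final MSE on training data: {mse:.4f}")
```

After training, the mean-squared error drops well below the variance of the target, showing the network has learned an approximation of f from its sampled values alone.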
