BACK PROPAGATION TRAINING ALGORITHM

Though several network architectures and training algorithms are available, the error back-propagation (BP) algorithm is by far the most popular in semiconductor manufacturing. Feed-forward neural networks trained by BP consist of several layers of simple processing elements called neurons, interconnections, and weights that are assigned to those interconnections (see figure below). Each neuron computes the weighted sum of its inputs and filters it through a sigmoidal (S-shaped) transfer function. The neurons are interconnected in such a way that information relevant to the I/O mapping is stored in the weights. The various layers of neurons in BP networks receive, process, and transmit information on the relationships between the input parameters and corresponding responses. Aside from the input and output layers, these networks incorporate one or more "hidden" layers of neurons which do not interact with the outside world, but assist in performing classification and nonlinear feature extraction tasks on information provided by the input and output layers.
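To make the forward computation concrete, the following is a minimal sketch in Python/NumPy of a single forward pass through a network with one hidden layer. The function names (sigmoid, forward) and the layer sizes are illustrative assumptions, and bias terms are omitted for brevity; this is a sketch of the idea, not a definitive implementation.

    import numpy as np

    def sigmoid(z):
        # Sigmoidal (S-shaped) transfer function.
        return 1.0 / (1.0 + np.exp(-z))

    def forward(x, w_hidden, w_out):
        # Each neuron forms the weighted sum of its inputs (here a
        # matrix-vector product) and filters it through the sigmoid.
        h = sigmoid(w_hidden @ x)   # hidden-layer activations
        y = sigmoid(w_out @ h)      # output-layer activations
        return h, y

    # Illustrative usage with arbitrary sizes (2 inputs, 3 hidden, 1 output):
    rng = np.random.default_rng(0)
    x = np.array([0.2, 0.7])
    w_hidden = rng.normal(size=(3, 2))
    w_out = rng.normal(size=(1, 3))
    _, y = forward(x, w_hidden, w_out)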

In the BP learning algorithm, the network begins with a random set of weights. An input vector is fed forward through the network, and the output values are calculated using this initial weight set. Next, the calculated output is compared with the measured output data, and the squared difference between this pair of vectors (the squared Euclidean distance between the calculated and measured output vectors) determines the error for that pattern. The error accumulated over all of the input-output pairs, E = ½ Σ_p ||y_p − d_p||², defines an error surface over the weight space. The network attempts to minimize this error using the gradient descent approach, in which the network weights are adjusted in the direction of decreasing error.
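The following is a minimal sketch of this training loop in Python/NumPy, under the same illustrative assumptions as before (one hidden layer, sigmoid activations, no biases; the names train, n_hidden, lr, and epochs are hypothetical). It accumulates the gradient over all input-output pairs before each weight update, matching the batch gradient-descent description above, and uses the standard sigmoid derivative s'(z) = s(z)(1 − s(z)).

    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def train(inputs, targets, n_hidden=4, lr=0.5, epochs=5000):
        # Begin with a random set of weights.
        n_in, n_out = inputs.shape[1], targets.shape[1]
        w1 = rng.normal(scale=0.5, size=(n_hidden, n_in))
        w2 = rng.normal(scale=0.5, size=(n_out, n_hidden))
        for _ in range(epochs):
            gw1 = np.zeros_like(w1)
            gw2 = np.zeros_like(w2)
            for x, t in zip(inputs, targets):
                # Feed the input vector forward with the current weights.
                h = sigmoid(w1 @ x)
                y = sigmoid(w2 @ h)
                # Propagate the squared-error gradient backward.
                d_out = (y - t) * y * (1.0 - y)
                d_hid = (w2.T @ d_out) * h * (1.0 - h)
                gw2 += np.outer(d_out, h)
                gw1 += np.outer(d_hid, x)
            # Gradient descent: adjust the weights in the direction
            # of decreasing accumulated error.
            w2 -= lr * gw2
            w1 -= lr * gw1
        return w1, w2

    # Illustrative usage: learning XOR, a simple nonlinear mapping.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    T = np.array([[0], [1], [1], [0]], dtype=float)
    w1, w2 = train(X, T)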