Principal Component Extraction via Hebbian-Type Learning Rules

by Ahmad Masadeh and Mohamad Hassoun (May 1998)

This applet implements three learning rules for extracting the major principal component of an arbitrary zero-mean distribution. These rules are the normalized Hebbian rule, Oja's rule, and the Yuille et al. rule, as described by Equations (3.3.5), (3.3.6), and (3.3.7) on page 92 of Mohamad Hassoun's textbook.
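
For reference, the three update rules can be written compactly in code. Below is a minimal NumPy sketch of the forms in which these rules are commonly stated (with unit output y = w·x and learning rate eta); the function names are illustrative, and the textbook equations should be consulted for the exact notation:

    import numpy as np

    def normalized_hebbian(w, x, eta):
        # Plain Hebbian step followed by renormalization to unit length.
        y = w @ x
        w = w + eta * y * x
        return w / np.linalg.norm(w)

    def oja(w, x, eta):
        # Oja's rule: Hebbian term plus a self-normalizing decay term.
        y = w @ x
        return w + eta * y * (x - y * w)

    def yuille(w, x, eta):
        # Yuille et al. rule: Hebbian term plus a ||w||^2 weight-decay term.
        y = w @ x
        return w + eta * (y * x - (w @ w) * w)

Iterated over samples drawn from a zero-mean training set, each rule drives w toward the dominant eigenvector direction of the input correlation matrix (the first two toward unit length, the Yuille et al. rule toward a length set by the dominant eigenvalue).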

Instructions

1. To start, create an initial distribution of points (i.e., a training set) by clicking the "Input Distribution" button and then clicking the mouse at different places in the input window to construct the distribution of choice. Each mouse click inserts a 100-point Gaussian cluster centered at the clicked pixel of the input plane, so an arbitrary distribution can be constructed by repeating this point-and-click procedure. The learning rules used here to extract the major principal component (dominant eigenvector direction) assume that the input distribution has zero mean; this can be accomplished by making sure the inserted training set is centered at the origin. More points may be added after the initial training session to increase the complexity or change the shape of the input distribution. (The sketch after this list simulates this construction.)
2. The next step is for the user to specify an initial weight vector. This is accomplished by clicking the "Initial State" button and then clicking on the desired point in the input window. The initial weight state is designated by a small black square. The "Initial State" button can also be used to reset the starting state (weight vector) after an initial phase of training: the training set is retained, but the current weight vector trajectory is erased as soon as a new initial weight vector is inserted in the input plane. At this point, the user may experiment with different initial states and/or change the learning rule and/or other parameters.
3. Clicking "Train" invokes the learning rule for extracting the principal eigenvector. The result is displayed as a weight vector trajectory superimposed on the input plane (a discrete trajectory of black points). The final weight vector (designated by a small green box) is arrived at after a sufficient number of cycles; on the order of 1000 cycles should be sufficient for most cases. The line connecting the origin to this small green square is the final weight vector, and it points in the direction (or the opposite direction) of the dominant eigenvector of the input distribution. The learning rate and number of training cycles may be altered by the user; however, the default values should work for most problems. (The training loop in the sketch after this list mimics this step.)
4. The "Project Data" button displays the projection of the training points onto a line in the direction of the extracted eigenvector (the last line of the sketch below computes the same scalar projections).
5. "Clear" clears the input plane and allows the user to experiment with new input distributions.