Progress in Connectionist-Based Information Systems, Proceedings of the 1997 International Conference on Neural Information Processing and Intelligent Information Systems (ICONIP'97), vol.1, pp.271-274, 1997.
This paper proposes a simple energy function, called ``superposed energy,'' which removes the redundancy of a perceptron by organizing its internal representation in order of the contribution of each hidden unit. For self-supervised learning in a three-layer linear perceptron, we show that each hidden unit activity corresponds exactly to a principal component of the training data. We also investigate its validity for learning in nonlinear perceptrons. Applied to data compression and function approximation, superposed energy removes the redundancy of the internal representation and improves generalization performance.
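The abstract's claim builds on a well-known fact: a three-layer linear perceptron trained for self-supervised reconstruction (a linear autoencoder) learns the principal subspace of its training data, though without the per-unit ordering that superposed energy adds. The following is a minimal sketch of that baseline correspondence, not the paper's superposed energy itself; the data dimensions, learning rate, and iteration count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data with a dominant 2-D principal subspace embedded in 5-D.
n, d, k = 500, 5, 2
latent = rng.normal(size=(n, k)) * np.array([3.0, 2.0])
mix = rng.normal(size=(k, d))
X = latent @ mix + 0.05 * rng.normal(size=(n, d))
X -= X.mean(axis=0)  # center, as PCA assumes

# Three-layer linear perceptron: encoder W1 (d x k), decoder W2 (k x d),
# trained by plain gradient descent on the reconstruction error.
W1 = rng.normal(scale=0.1, size=(d, k))
W2 = rng.normal(scale=0.1, size=(k, d))
lr = 1e-3
for _ in range(2000):
    H = X @ W1          # hidden unit activities
    E = H @ W2 - X      # reconstruction error
    gW2 = H.T @ E / n   # gradient w.r.t. decoder
    gW1 = X.T @ (E @ W2.T) / n  # gradient w.r.t. encoder
    W1 -= lr * gW1
    W2 -= lr * gW2

# PCA reconstruction onto the top-k principal components, via SVD.
U, S, Vt = np.linalg.svd(X, full_matrices=False)
X_pca = X @ Vt[:k].T @ Vt[:k]
X_ae = X @ W1 @ W2

# The trained network spans the same principal subspace, so the two
# reconstruction errors should nearly coincide.
err_ae = np.linalg.norm(X - X_ae)
err_pca = np.linalg.norm(X - X_pca)
print(err_ae, err_pca)
```

Note that the plain autoencoder only recovers the principal *subspace*: its hidden units are an arbitrary rotation of the principal components. The paper's contribution is an energy term that breaks this rotational symmetry so that individual hidden units align with individual components, ordered by contribution.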