*''Removing the Redundancy of Perceptrons in Terms of a Simple Energy Function''

**T. Takahashi and R. Tokunaga,

Progress in Connectionist-Based Information Systems, Proceedings of the 1997 International Conference on Neural Information Processing and Intelligent Information Systems (ICONIP'97), vol.1, pp.271-274, 1997.

**Abstract

This paper reports a simple energy function, called ``superposed energy,'' which removes the redundancy of a perceptron by organizing its internal representation according to the contribution of each hidden unit. For self-supervised learning of a three-layer linear perceptron, we show that each hidden unit activity corresponds exactly to a principal component of the training data. We also investigate its validity for the learning of nonlinear perceptrons. Applied to data compression and function approximation, superposed energy is shown to remove the redundancy of the internal representation and to improve generalization performance.
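The abstract's claim about the linear case rests on a classical result: a three-layer linear perceptron trained in self-supervised (autoencoding) mode, minimizing squared reconstruction error, learns hidden weights that span the principal subspace of the training data. The sketch below illustrates only that standard result with plain gradient descent and NumPy; it does not implement the paper's superposed energy function, and all variable names and hyperparameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: 200 samples in R^5, variance concentrated in the
# first two coordinate directions, then centered.
n, d, k = 200, 5, 2
scales = np.array([5.0, 3.0, 0.5, 0.3, 0.1])
X = rng.standard_normal((n, d)) * scales
X -= X.mean(axis=0)

# Reference answer: top-k principal directions from an SVD of the data.
_, _, Vt = np.linalg.svd(X, full_matrices=False)
pc = Vt[:k]                          # (k, d) principal directions

# Linear autoencoder with tied weights W (k x d):
# hidden activity h = W x, reconstruction x_hat = W^T W x.
# Gradient descent on the mean squared reconstruction error.
W = 0.1 * rng.standard_normal((k, d))
lr = 1e-3
for _ in range(3000):
    R = X @ W.T @ W - X              # reconstruction residual (n, d)
    grad = W @ (X.T @ R + R.T @ X) / n
    W -= lr * grad

# The rows of the learned W span the same subspace as the top-k principal
# directions: projecting pc onto span(W) should leave its norm unchanged.
Q, _ = np.linalg.qr(W.T)             # orthonormal basis of learned subspace
alignment = np.linalg.norm(pc @ Q) / np.sqrt(k)   # ~1.0 if subspaces agree
```

Note that without an extra symmetry-breaking term such as the paper's superposed energy, the individual rows of `W` are only guaranteed to span the principal subspace; they need not coincide one-to-one with the ordered principal components, which is the redundancy the paper addresses.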
