Publications

TAKAHASHI Takashi
Department of Applied Mathematics and Informatics,
Faculty of Science and Technology,
Ryukoku University
Last modified: Wed Jan 7 22:13:05 2004

Submitted and Accepted | Journal Paper | Conference Proceedings



Submitted or Accepted


  1. Takashi TAKAHASHI, Takio KURITA and Yukifumi IKEDA
    ``A Neural Network Classifier with Preprocessing to Correct Outliers in an Image''
    Transactions of the IEICE vol.J??-D-II no.?? pp.??-??, (accepted) (in Japanese)



Journal Paper


  1. Takio KURITA and Takashi TAKAHASHI,
    ``Viewpoint Independent Face Recognition by Competition of the Viewpoint Dependent Classifiers''
Neurocomputing, vol.51, pp.181--195, 2003.


  2. H. You, T. Takahashi, Y. Hasegawa, and R. Tokunaga,
    ``Improving LIFS Image Coding via Extended Condensations,''
    Systems and Computers in Japan, vol.30, no.5, pp.1--8, 1999.


  3. H. You, T. Takahashi, H. Koono, and R. Tokunaga,
    ``Improving LIFS Image Coding by Using Extended Condensations and Gram-Schmidt Orthogonalization,''
    Transactions of the IEICE vol.J81-D-II no.12 pp.2731-2737, 1998 (in Japanese)

PDF version (in IEICE.org)
  4. H. You, T. Takahashi, Y. Hasegawa, and R. Tokunaga,
    ``Improving LIFS Image Coding via Extended Condensations,''
    Transactions of the IEICE vol.J81-D-II no.7 pp.1576-1583, 1998 (in Japanese)

PDF version (in IEICE.org)
  5. T. Takahashi and R. Tokunaga
    ``A Fast Computation Algorithm for Predicting AC Components of Images Using Mean Values of Blocks''
    Transactions of the IEICE vol.J81-D-II no.4 pp.778-780, 1998 (in Japanese)

    This paper discusses an algorithm that predicts the AC components of block-segmented images solely from the mean values of adjacent blocks. The algorithm requires only integer addition, subtraction, and bit-shift operations, and substantially reduces computational cost without degrading prediction performance.

    gzipped ps version, PDF version (in IEICE.org)
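    The integer-only prediction described in the abstract above might be sketched as follows; the function name, the choice of neighbours, and the shift-based scaling are illustrative assumptions, not the paper's exact formulas.

    ```python
    def predict_ac(m_left, m_right, m_top, m_bottom):
        # Estimate first-order horizontal/vertical AC terms of a block from
        # the difference of neighbouring block means.  The right shift by 3
        # stands in for a division by 8 (an illustrative scaling), so only
        # integer add, subtract, and shift operations are used.
        ac_h = (m_right - m_left) >> 3
        ac_v = (m_bottom - m_top) >> 3
        return ac_h, ac_v
    ```

    The appeal of such a predictor is that it touches each block only through its mean, so it composes naturally with DC-only coded representations.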
  6. T. Takahashi and R. Tokunaga
    ``Removing redundancy of multi-layer perceptron using superposed energy function''
    Transactions of the IEICE vol.J80-D-II no.9 pp.2532-2540, 1997 (in Japanese)

    This paper discusses an energy function for removing redundancy from a multi-layer perceptron. The error function, called the superposed energy function, is given by a superposition of conventional error functions using different numbers of hidden units. Numerical experiments show that error back-propagation with the superposed energy function acquires a less redundant, clearer internal representation and arranges the hidden units in order of their contributions. Its validity for improving generalization performance is also discussed.

    gzipped ps version, PDF version (in IEICE.org)
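    For a linear three-layer network, the superposed energy described above can be sketched as the sum of reconstruction errors of all truncated networks that keep only the first k hidden units; the matrix shapes and looping here are an illustrative reading, not the paper's exact formulation.

    ```python
    import numpy as np

    def superposed_energy(X, W, V):
        # X: (n, d) data; W: (d, H) input-to-hidden weights; V: (H, d)
        # hidden-to-output weights.  Superpose the ordinary squared errors
        # of the truncated networks using hidden units 1..k, for k = 1..H,
        # so earlier units are penalized into carrying more of the signal.
        H = W.shape[1]
        total = 0.0
        for k in range(1, H + 1):
            Y = X @ W[:, :k] @ V[:k, :]      # reconstruction with k units
            total += np.sum((X - Y) ** 2)
        return total
    ```

    Minimizing this sum, rather than the single full-width error, is what orders the hidden units by contribution.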
  7. T. Takahashi, R. Tokunaga and Y. Hirai
    ``On supervised learning algorithm of three-layer linear perceptron --- An extension of Baldi-Hornik's theorem''
    Transactions of the IEICE vol.J80-D-II no.5 pp.1267-1275, 1997 (in Japanese)

    This paper discusses a learning rule for a multi-layer perceptron that generates an approximation of the identity mapping on a set of input data. Baldi and Hornik's theorem guarantees that a three-layer linear perceptron trained by the ordinary error back-propagation method gives a mapping equivalent to the Karhunen-Lo\`eve (KL) transformation. The internal representation, however, is not identical to the set of KL-transformation coefficients, so it is difficult to estimate the contribution of each intermediate unit. This paper proposes a learning rule that yields an internal representation corresponding exactly to the set of KL coefficients, which is proved by an extension of Baldi and Hornik's theorem. Numerical experiments also show that this learning rule is valid for multi-layer nonlinear perceptrons.

    gzipped ps version, PDF version (in IEICE.org)
  8. T. Takahashi and Y. Hirai,
    ``Self-organization of spatio-temporal visual receptive fields,''
    IEICE Transactions on Information and Systems, vol.E79-D, no.7, pp.980--989, 1996.

    A self-organizing neural network model of spatio-temporal visual receptive fields is proposed. It consists of a one-layer linear learning network with multiple temporal input channels, each with a different impulse response. Every weight of the learning network is modified according to the Hebb-type learning algorithm proposed by Sanger. Simulation studies show that various types of spatio-temporal receptive fields are self-organized by the network with random noise inputs; some have response characteristics similar to X- and Y-type cells found in the mammalian retina. The properties of the receptive fields obtained by the network are analyzed theoretically. It is shown that only circularly symmetric receptive fields change their spatio-temporal characteristics depending on the bias of the inputs. In particular, when the inputs have non-zero mean, the temporal properties of center-surround receptive fields become heterogeneous and vary with position within the receptive field.
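    The Hebb-type rule of Sanger mentioned above is the generalized Hebbian algorithm; a minimal single-sample update might look like this (the learning rate and array shapes are illustrative assumptions):

    ```python
    import numpy as np

    def sanger_update(W, x, lr=0.01):
        # Generalized Hebbian (Sanger) update: each output unit learns the
        # next principal component via a Hebb term minus a Gram-Schmidt-like
        # correction built from the outputs of itself and preceding units.
        y = W @ x                              # (m,) unit activities
        for i in range(W.shape[0]):
            back = y[:i + 1] @ W[:i + 1]       # reconstruction by units 0..i
            W[i] += lr * y[i] * (x - back)
        return W
    ```

    Iterated over samples, the rows of W converge (up to sign) to the leading principal components in order.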



Conference Proceedings


  1. M. KOBAYASHI and T. TAKAHASHI
    ``A subspace-based face detection method using image-size reduction and magnification''
    In Proc. of International Symposium on Artificial Life and Robotics (AROB 9th), Beppu, Oita, (accepted)


  2. T. Kurita, M. Pic, and T. TAKAHASHI
    ``Recognition and detection of occluded faces by a neural network classifier with recursive data reconstruction''
    In Proc. of IEEE Conf. on Advanced Video and Signal Based Surveillance (AVSS2003), Miami, Florida, 21-22 July, pp.53-58, 2003.


  3. Takashi TAKAHASHI and Takio KURITA
    ``Robust De-Noising by Kernel PCA''
    In Artificial Neural Networks - ICANN 2002 (Proc. of the International Conference on Artificial Neural Networks), Springer, pp.739-744, 2002

    Recently, kernel Principal Component Analysis has become a popular technique for feature extraction. It enables us to extract nonlinear features and therefore serves as a powerful preprocessing step for classification. One drawback, however, is that the extracted feature components are sensitive to outliers contained in the data; this is a characteristic common to all PCA-based techniques. In this paper, we propose a method that removes outliers in data vectors and replaces them with values estimated via kernel PCA. By repeating this process several times, we obtain feature components less affected by the outliers. We apply this method to a set of face image data and confirm its validity for a recognition task.
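    The iterative replace-and-reconstruct loop can be sketched with a linear-PCA stand-in (the paper uses kernel PCA, whose pre-image computation is omitted here for brevity; the function name, mask-based outlier detection, and parameters are illustrative assumptions):

    ```python
    import numpy as np

    def pca_denoise(x, mean, components, outlier_mask, n_iter=20):
        # Simplified linear-PCA stand-in for the kernel-PCA procedure.
        # components: (k, d) orthonormal principal axes.  Entries of x
        # flagged in outlier_mask are repeatedly replaced by their
        # projection-based reconstruction; trusted entries are kept.
        x = x.copy()
        for _ in range(n_iter):
            recon = mean + components.T @ (components @ (x - mean))
            x[outlier_mask] = recon[outlier_mask]
        return x
    ```

    Because the trusted coordinates are held fixed, each iteration pulls the flagged coordinates toward the subspace consistent with the rest of the vector.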


  4. Takio KURITA, Takashi TAKAHASHI and Yukifumi IKEDA,
    ``A Neural Network Classifier for Occluded Images''
    Proc. of International Conference on Pattern Recognition (ICPR2002)
    vol.III, pp.45-48, 2002

    This paper proposes a neural network classifier that automatically detects the occluded regions in a given image and replaces those regions with estimated values. An auto-associative memory is used to detect outliers such as pixels in the occluded regions. The certainty of each pixel is estimated by comparing the input pixels with the outputs of the auto-associative memory, and the input values to the associative memory are replaced with new values defined according to these certainties. By repeating this process, we obtain an image in which the pixel values of the occluded regions are replaced with the estimates. The proposed classifier is designed by integrating this associative memory with a simple classifier.

    PDF version
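    The certainty-weighted replacement loop described in the abstract above might be sketched as follows; the Gaussian certainty function, its width, and the iteration count are assumptions for illustration, not the paper's exact definitions.

    ```python
    import numpy as np

    def reconstruct_occluded(x, autoassoc, sigma=0.1, n_iter=10):
        # Pixels whose input disagrees strongly with the auto-associative
        # output receive low certainty c and are pulled toward the
        # estimate; pixels that agree are left nearly unchanged.
        x = x.copy()
        for _ in range(n_iter):
            y = autoassoc(x)
            c = np.exp(-((x - y) ** 2) / (2 * sigma ** 2))  # certainty in [0, 1]
            x = c * x + (1 - c) * y
        return x
    ```

    The soft blend avoids a hard occlusion threshold: heavily occluded pixels converge to the memory's estimate while clean pixels stay close to the input.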
  5. Kenji NISHIDA, Takashi TAKAHASHI and Takio KURITA,
    ``A Topographic Kernel-based Regression Method''
    International Conference on Computational Intelligence and Neuroscience(ICCIN2002), 2002

    This paper proposes a topographic kernel-based regression method in which the kernel bases are self-organized by Kohonen's Self-Organizing Feature Map (SOM). By clustering the training samples with the SOM, the number of kernel bases is restricted to a specified small value. It is also expected that the generalization ability of the network is improved by introducing neighboring relations between the kernel bases. We also employ top-down learning to modify the locations of the bases so as to minimize the mean squared error, because the locations self-organized by the SOM are not necessarily optimal for the given regression task. Experimental results show that the locations of the kernel bases are automatically self-organized depending on both the local densities of the training samples and the local complexities of the target function.


  6. Takashi TAKAHASHI, Toshio TANAKA, Kenji NISHIDA and Takio KURITA,
    ``Self-Organization of Place Cells and Reward-Based Navigation for a Mobile Robot,''
    Proc. International Conference on Neural Information Processing(ICONIP2001), paper ID #251, 2001

    We investigate a method to navigate a mobile robot using a self-organizing map and reinforcement learning. Modeling hippocampal place cells, the map consists of units activated at specific locations in an environment. To adapt the map to a real-world environment, the preferred locations of these units are self-organized by Kohonen's algorithm using the robot's actual position data. An actor-critic network is then given the position information from the self-organized map and trained to acquire goal-directed behavior. Simulations show that the network successfully achieves navigation while avoiding obstacles.

    PDF version, presentation material
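    The place-cell self-organization step can be sketched as a Kohonen update of unit centres toward a visited position; the index-space neighborhood function and all parameters are illustrative choices, and the actor-critic part is omitted.

    ```python
    import numpy as np

    def kohonen_step(centers, pos, lr=0.1, radius=1.0):
        # One Kohonen update: the unit whose centre is closest to the
        # visited position wins, and it and its neighbours (Gaussian
        # falloff over unit indices, an illustrative 1-D topology) move
        # their centres toward pos.
        d = np.linalg.norm(centers - pos, axis=1)
        win = np.argmin(d)
        h = np.exp(-((np.arange(len(centers)) - win) ** 2) / (2 * radius ** 2))
        centers += lr * h[:, None] * (pos - centers)
        return centers
    ```

    Driving this with the robot's recorded positions concentrates the centres where the robot actually travels, which is what makes the units behave like place cells.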


  7. Takashi TAKAHASHI and Takio KURITA,
    ``A self-organizing model of feature columns and face responsive neurons in the temporal cortex,''
    Proceedings of International Joint Conference on Neural Networks(IJCNN), pp. 82-87, 2001

    We investigate a self-organizing network model to account for the computational properties of the inferotemporal cortex. The network can learn sparse codes for given data while organizing their topographic mapping. Simulation experiments are performed using real face images of different individuals at different viewing directions, and the results show that the network develops an information representation consistent with some physiological findings. By analyzing the characteristics of the neuron activities, it is also demonstrated that the present model self-organizes an efficient representation for coding both the global structure and the finer details of the face images.

    PDF version
  8. Katsuya YASUMOTO, Takio KURITA and Takashi TAKAHASHI,
    ``Vision-based Recognition of the Hand Postures Using Self-Organizing Map and Linear Discriminant Analysis,''
    Proceedings of INTERACT2001, 2001
  9. T. Kurita, H. Shimai, T. Mishima and T. Takahashi
    ``Self-Organization of Viewpoint Dependent Face Representation by the Self-Supervised Learning and Viewpoint Independent Face Recognition by the Mixture of Classifiers,''
    Proceedings of IAPR Workshop on Machine Vision Applications(MVA2000), pp.319-322, 2000

    This paper proposes a viewpoint-invariant face recognition method in which several viewpoint-dependent classifiers are combined by a gating network. The gating network is designed as an autoencoder with competitive hidden units; viewpoint-dependent representations of faces can be obtained by this autoencoder from many faces with different views. A multinomial logit model is used for the viewpoint-dependent classifiers. By combining the classifiers with the gating network, the system can be self-organized such that one of the classifiers is selected depending on the viewpoint of a given input face image. Experimental results of view-invariant face recognition are shown using face images captured from different viewpoints.
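    The gated combination of viewpoint-dependent classifiers can be sketched as a weighted sum of expert outputs; the function names and the assumption that the gate returns normalized weights are illustrative, not the paper's interfaces.

    ```python
    import numpy as np

    def mixture_predict(x, classifiers, gate):
        # Mixture-of-experts style combination: the gating network assigns
        # a weight to each viewpoint-dependent classifier, and the final
        # class posterior is the weighted sum of the experts' posteriors.
        g = gate(x)                                        # (n_experts,), sums to 1
        preds = np.stack([clf(x) for clf in classifiers])  # (n_experts, n_classes)
        return g @ preds
    ```

    When the gate's weights collapse toward one expert for each viewpoint, the mixture reduces to selecting the matching viewpoint-dependent classifier, as described above.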


  10. T. Takahashi, T. Kurita,
    ``Reconstructing optical flow generated by camera rotation via autoassociative learning,''
    Proceedings of the IEEE-INNS-ENNS International Joint Conference on Neural Networks 2000(IJCNN2000), vol.IV, pp.279-283

    We investigate methods to reconstruct the optical flow generated by camera rotation using autoassociative learning. A multi-layer perceptron is trained to reduce the dimensionality of flow data obtained from real image sequences while the camera rotates against static scenes. After this learning, the perceptron can produce reconstructions of the flow that remove the noise in the original flow data. It is also shown that the robustness of reconstruction for noisy data is improved by two changes: introducing confidence values of the optical flow into the error function, and applying an additional data-correction method.

    gzipped ps file
  11. T. Takahashi, and R. Tokunaga,
    ``Nonlinear Dimensionality Reduction by Multi Layer Perceptron Using Superposed Energy''
    Proceedings of 1999 International Symposium on Nonlinear Theory and its Applications(NOLTA99), vol. 2, pp. 863--866, 1999.

    We investigate an energy function for MLPs called the superposed energy. Applied to the autoassociative learning of a sandglass-type MLP, it can adaptively adjust the effective number of bottleneck-layer units to the intrinsic dimensionality of nonlinear data, so that the optimal dimensionality-reduced representation can be extracted after learning.

    gzipped ps file, presentation materials


  12. T. Takahashi and R. Tokunaga,
    ``Energy Functions for Efficient Nonlinear Dimensionality Reduction by Multi Layer Perceptrons,''
    Proceedings of the Fifth International Conference on Neural Information Processing (ICONIP'98), vol.1, pp.494-497, 1998.

    This paper investigates two simple energy functions, each suited to a different purpose of dimensionality reduction: feature extraction and data compression. These energy functions enable nonlinear perceptrons to organize data representations whose parameters, namely the outputs of the bottleneck-layer units, are arranged in order of their importance. The efficacy of these energy functions is shown by numerical experiments in comparison with conventional squared error functions and Principal Component Analysis.

    gzipped ps file
  13. T. Takahashi and R. Tokunaga,
    ``Removing the Redundancy of Perceptrons in Terms of a Simple Energy Function,''
    Progress in Connectionist-Based Information Systems, Proceedings of the 1997 International Conference on Neural Information Processing and Intelligent Information Systems (ICONIP'97), vol.1, pp.271-274, 1997.

    This paper reports a simple energy function, called ``superposed energy,'' which removes the redundancy of a perceptron by organizing its internal representation in order of the contribution of the hidden units. For self-supervised learning of a three-layer linear perceptron, we show that each hidden-unit activity corresponds exactly to a principal component of the training data. We also investigate its validity for the learning of nonlinear perceptrons. Applying it to data compression and function approximation, we show that the superposed energy removes the redundancy of the internal representation and improves generalization performance.


  14. T. Takahashi and Y. Hirai,
    ``Self-organization of spatio-temporal receptive fields,''
    Proceedings of the International Conference on Neural Information Processing '94 (ICONIP'94), vol.2, pp.960-965, 1994.

    A model of self-organizing spatio-temporal receptive fields is proposed. It consists of a one-layer feed-forward network with multiple delay channels. Every weight of the network is modified according to the Hebb-type learning algorithm proposed by Sanger. The network is trained with random Gaussian noise inputs with nonzero mean. It is shown that a variety of spatio-temporal receptive fields are acquired by this network; some have properties similar to visual neurons found in the mammalian retina, especially X- and Y-type ganglion cells.
