Jian-Xiong Wu and Chorkin Chan
Department of Computer Science
University of Hong Kong
Based on the principle of maximizing the likelihood of proper classification of training samples, an algorithm is proposed to train the artificial neural pattern density estimator introduced in . In this study, previous restrictions on the unit functions were relaxed so that each unit in the network represented a joint density of independent Gaussian variables with equal variances, while variances were free to differ across densities. The algorithm was tested with samples drawn from known mixtures of memoryless Gaussian sources as well as from exponential and Gamma densities. Both one- and two-dimensional cases were explored. The success of the network in estimating the p.d.f.'s depended on how well they were represented by the training samples, the number of hidden units employed, and how thoroughly the network was trained. Samples from two mixtures corresponding to two classes were used to test the network's capability as a classifier by comparing its recognition rates against those of a Bayes classifier.
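The estimator described above amounts to fitting a mixture of isotropic Gaussian kernels by maximum likelihood, with a separate scalar variance per component. As a minimal illustrative sketch (not the paper's algorithm, which trains a feed-forward network; here an EM-style maximum-likelihood fit is used instead, and all names are hypothetical):

```python
import numpy as np

def fit_isotropic_gmm(x, k, iters=50, seed=0):
    """Maximum-likelihood fit (via EM) of a mixture of isotropic Gaussians.

    Each component has equal variances across its dimensions, but the
    scalar variance may differ between components, mirroring the relaxed
    unit functions described in the abstract.
    """
    rng = np.random.default_rng(seed)
    n, d = x.shape
    mu = x[rng.choice(n, k, replace=False)]   # initial means from the data
    var = np.full(k, x.var())                 # per-component scalar variance
    w = np.full(k, 1.0 / k)                   # mixing weights
    ll_history = []
    for _ in range(iters):
        # E-step: log density of each sample under each component
        sq = ((x[:, None, :] - mu[None]) ** 2).sum(-1)            # (n, k)
        logp = (np.log(w) - 0.5 * d * np.log(2 * np.pi * var)
                - sq / (2 * var))
        m = logp.max(1, keepdims=True)
        lse = m[:, 0] + np.log(np.exp(logp - m).sum(1))           # log-sum-exp
        ll_history.append(lse.sum())                              # data log-likelihood
        r = np.exp(logp - lse[:, None])                           # responsibilities
        # M-step: re-estimate weights, means, and scalar variances
        nk = r.sum(0)
        w = nk / n
        mu = (r.T @ x) / nk[:, None]
        sq = ((x[:, None, :] - mu[None]) ** 2).sum(-1)
        var = (r * sq).sum(0) / (d * nk)
    return w, mu, var, ll_history
```

Because each EM iteration cannot decrease the likelihood, the returned `ll_history` should be monotonically non-decreasing; this property is a convenient sanity check when experimenting with such estimators.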
Keywords: feed-forward network, kernel functions, maximum likelihood estimation, density estimation, Bayes classifier
Received December 21, 1989; revised November 17, 1990.
Communicated by Lin-Shan Lee.
*The content of this paper also appeared in IEEE TENCON 90, Hong Kong, 24-27 September, 1990.
The conference is organized by the IEEE Hong Kong Section.