Hsien-Chung Wei, Yung-Ching Chang and Jia-Shung Wang
Institute of Computer Science
National Tsing Hua University
Hsinchu, Taiwan 300, R.O.C.
Vector quantization (VQ) has been used in image compression to minimize the number of bits required to represent and transmit an image. The Kohonen Neural Network (KNN), also known as the Kohonen self-organizing feature map, creates a vector quantizer by adjusting the weights from input nodes to output nodes. Using this feature map, we obtain a codebook together with the neighborhood relations between its codewords. Based on these neighborhood relations, we can create a structured codebook to improve the search time and/or bit rate. However, there is an intrinsic problem in applying KNN directly: as the codebook size increases, the coding performance is not uniformly good. To overcome this problem, we propose a modified version of KNN, called Hierarchical KNN (HKNN). In this algorithm, we first produce a small codebook and then increase the number of codewords by a splitting process. Using this method, the image "lenna" (size 512×512) can be coded at 0.5 bpp with a PSNR of 32.197 dB. Furthermore, we present an adaptive VQ scheme, Adaptive HKNN, for image sequence coding. According to our experimental results, the improvement due to Adaptive HKNN can be up to 2.5 dB with 0.16 bpp transmission overhead for the image sequence "claire".
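The Kohonen feature-map training summarized above can be sketched as follows. This is a minimal illustrative sketch, not the paper's algorithm: the function name, the linearly decaying learning-rate and neighborhood-radius schedules, and the 1-D map topology are all assumptions for illustration. Each output node's weight vector becomes one codeword, and nodes that are close on the map index line end up with similar codewords, which is the neighborhood relation the abstract refers to.

```python
import numpy as np

def train_som_codebook(vectors, n_codewords, epochs=10, lr0=0.5, seed=0):
    """Train a 1-D Kohonen self-organizing map as a vector quantizer.

    Illustrative sketch: schedules and parameters are assumptions,
    not the settings used in the paper.
    """
    rng = np.random.default_rng(seed)
    dim = vectors.shape[1]
    # start codewords near the data mean with a small perturbation
    codebook = vectors.mean(axis=0) + 0.01 * rng.standard_normal((n_codewords, dim))
    radius0 = n_codewords / 2.0
    n_steps = epochs * len(vectors)
    t = 0
    for _ in range(epochs):
        for x in vectors[rng.permutation(len(vectors))]:
            frac = t / n_steps
            lr = lr0 * (1.0 - frac)                 # decaying learning rate
            radius = max(1.0, radius0 * (1.0 - frac))  # shrinking neighborhood
            # best-matching unit: nearest codeword in Euclidean distance
            bmu = np.argmin(((codebook - x) ** 2).sum(axis=1))
            # Gaussian neighborhood on the 1-D map index line
            d = np.arange(n_codewords) - bmu
            h = np.exp(-(d ** 2) / (2.0 * radius ** 2))
            # pull the winner and its map neighbors toward the input
            codebook += lr * h[:, None] * (x - codebook)
            t += 1
    return codebook

# toy usage: quantize 4-dimensional vectors drawn from two clusters
rng = np.random.default_rng(1)
data = np.vstack([rng.normal(0.0, 1.0, (200, 4)),
                  rng.normal(5.0, 1.0, (200, 4))])
cb = train_som_codebook(data, n_codewords=8)
# encode: index of the nearest codeword for each input vector
indices = np.argmin(((data[:, None, :] - cb[None, :, :]) ** 2).sum(axis=-1), axis=1)
```

A full VQ coder would transmit only `indices` (log2 of the codebook size in bits per vector) and look the codewords up at the decoder; the HKNN splitting step the abstract mentions would grow `cb` from a small trained map rather than training a large map from scratch.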
Keywords: image compression, vector quantization, Kohonen neural network, structured codebook, adaptive algorithm
Received February 4, 1993; revised April 9, 1993.
Communicated by Soo-Chang Pei.