Quantization Techniques




Scalar quantization is well developed in both theory and application; the only flourishing research areas are the construction of cheaper, higher-resolution quantizers for A/D conversion and the theory and practice of feedback quantization. Most activity in the theory and design of quantizers for lossy compression has focused on the design of quantization tables or bit-allocation algorithms for multiple scalar quantizers, and on design algorithms for more general vector quantizers. In both cases the quantizers can be applied either directly to the signal or, more commonly, to a decomposed signal following a transform such as a wavelet transform or other preprocessing. When vector quantizers are used without a signal decomposition, the bit-rate versus image-quality performance can suffer, but extremely simple decompression algorithms involving only table lookup and no computation can be used. This makes them an attractive alternative in applications such as decompressing video on a personal computer, where high quality is not needed but fast and simple software decompression is required.
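To make the table-lookup point concrete, the following minimal sketch (illustrative code, not drawn from the text; all function and variable names are assumptions) shows a uniform scalar quantizer alongside a vector-quantizer decoder whose decompression step is nothing more than indexing into a codebook.

```python
import numpy as np

def uniform_scalar_quantize(x, step):
    """Midtread uniform scalar quantizer: map each sample to the index of
    the nearest multiple of `step`, and also return the reconstruction."""
    indices = np.round(np.asarray(x, dtype=float) / step).astype(int)
    return indices, indices * step

def vq_decode(indices, codebook):
    """Vector-quantizer decoder: decompression is one table lookup per
    block -- no arithmetic is required."""
    return codebook[indices]

# Toy usage: quantize a short signal with step size 0.5, then decode
# three block indices against a 4-entry, 2-dimensional codebook.
idx, rec = uniform_scalar_quantize([0.13, 0.47, -0.92, 1.30], step=0.5)
codebook = np.array([[0, 0], [0, 255], [255, 0], [255, 255]])
blocks = vq_decode(np.array([2, 1, 3]), codebook)
```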

Vector quantizer design algorithms have concentrated on quantizers with the geometrical structures called lattices and trellises, and on quantizers designed using statistical clustering methods. Lattice vector quantizers and trellis-coded quantizers have proved useful for medium-rate speech compression, and they have shown promise for low bit-rate image compression when used in cascade with wavelet decompositions, providing good-quality images at under 0.25 bits per pixel. The clustering methods have the intriguing side benefit of resembling statistical classification algorithms and hence suggest the possibility of combining compression with other signal processing tasks such as classification and segmentation. A variety of clustering algorithms have been reported in the literature, including ideas from neural nets, self-organizing feature maps, simulated annealing, and cellular automata, as well as many algorithms long used in statistics and taxonomy. Although active research continues in these areas, there is rarely clear evidence of any general superiority of one approach, and comparable performance is achievable in a variety of ways. The most significant advances have come either from matching a quantization technique to a particular signal decomposition such as the DCT or DWT, or from imposing a low-complexity structure on the quantizer and then performing a constrained clustering design.
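As an illustration of the clustering-based design approach, the sketch below implements a generalized-Lloyd (k-means-style) iteration for building a VQ codebook from training vectors. It is a minimal sketch under simple assumptions (squared-error distortion, random initialization); the names and parameters are invented for this example rather than taken from any particular reference.

```python
import numpy as np

def design_codebook(training_vectors, codebook_size, iterations=20, seed=0):
    """Design a VQ codebook by statistical clustering (generalized-Lloyd /
    k-means-style): alternately assign each training vector to its nearest
    codeword, then move each codeword to the centroid of its cell."""
    training_vectors = np.asarray(training_vectors, dtype=float)
    rng = np.random.default_rng(seed)
    # Initialize the codewords from randomly chosen training vectors.
    init = rng.choice(len(training_vectors), size=codebook_size, replace=False)
    codebook = training_vectors[init].copy()
    for _ in range(iterations):
        # Nearest-neighbor assignment under squared-error distortion.
        dists = ((training_vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        labels = dists.argmin(axis=1)
        # Centroid update; an empty cell keeps its previous codeword.
        for k in range(codebook_size):
            members = training_vectors[labels == k]
            if len(members):
                codebook[k] = members.mean(axis=0)
    return codebook

# Example: design a 16-word codebook for 4-dimensional training vectors.
training = np.random.default_rng(1).normal(size=(2000, 4))
codebook = design_codebook(training, codebook_size=16)
```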

A form of vector quantizer that has received much attention in the popular press and the computer magazines is the iterated-function-system or fractal code. These codes replace clustering or lattices with an algorithm that finds blocks within an image that can be used, via simple operations, to approximate other blocks of the same image. Decompression is simple because it can be viewed as a vector quantizer operating on raw pixels. Results, however, have been at best competitive with the JPEG standard while requiring higher complexity. Although fractals have proved extremely useful for artificial image generation, they have yet to have a major impact on the compression of natural images.
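The block-matching step at the heart of such coders can be sketched as follows. This is a deliberately reduced illustration (no rotations, reflections, or quadtree partitioning; the block sizes and function names are assumptions), intended only to show how one block of an image is approximated by a scaled and offset copy of another block of the same image.

```python
import numpy as np

def fractal_encode(image, rsize=4):
    """Reduced sketch of fractal (iterated-function-system) block coding:
    each non-overlapping rsize x rsize range block is approximated by the
    best-matching downsampled domain block under an affine (contrast s,
    brightness o) adjustment.  Assumes the image dimensions are multiples
    of 2 * rsize."""
    img = np.asarray(image, dtype=float)
    h, w = img.shape
    dsize = 2 * rsize                 # domain blocks are twice the range size
    code = []
    for ry in range(0, h, rsize):
        for rx in range(0, w, rsize):
            r = img[ry:ry + rsize, rx:rx + rsize]
            best = None
            for dy in range(0, h - dsize + 1, dsize):
                for dx in range(0, w - dsize + 1, dsize):
                    dom = img[dy:dy + dsize, dx:dx + dsize]
                    # Shrink the domain block to the range size by 2x2 averaging.
                    dom = dom.reshape(rsize, 2, rsize, 2).mean(axis=(1, 3))
                    # Least-squares fit of r ~ s * dom + o.
                    s, o = np.polyfit(dom.ravel(), r.ravel(), 1)
                    err = ((s * dom + o - r) ** 2).sum()
                    if best is None or err < best[0]:
                        best = (err, dy, dx, s, o)
            code.append((ry, rx) + best[1:])
    return code

# Toy usage on a random 32 x 32 "image".
code = fractal_encode(np.random.default_rng(0).integers(0, 256, size=(32, 32)))
```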

While neural networks have not shown marked advantages in designing vector quantizers, they hold promise as a means of implementing the quantizers in hardware. Neural nets have also been touted for the "learning" implicit in adaptive compressors, but evidence is scant.






Vijay K. Madisetti
Mon Jan 30 11:05:18 EST 1995