Block Truncation Coding
Block Truncation Coding, or BTC, is a type of lossy image compression technique for greyscale images. It divides the original image into blocks and then uses a quantiser to reduce the number of grey levels in each block whilst maintaining the same mean and standard deviation. It is an early predecessor of the popular hardware DXTC (S3 Texture Compression) technique, although the BTC compression method was first adapted to colour long before DXTC, using a very similar approach called Color Cell Compression. BTC has also been adapted to video compression.

BTC was first proposed by Professors Mitchell and Delp at Purdue University. Another variation of BTC is Absolute Moment Block Truncation Coding (AMBTC), in which the first absolute moment is preserved along with the mean instead of the standard deviation. AMBTC is computationally simpler than BTC and was proposed by Maximo Lema and Robert Mitchell.

Using sub-blocks of 4x4 pixels gives a compression ratio of 4:1, assuming 8-bit integer values are used during transmission or storage. Larger blocks allow greater compression (the "a" and "b" values are spread over more pixels); however, quality also reduces with the increase in block size due to the nature of the algorithm.
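To make the arithmetic explicit (assuming the mean and standard deviation are each stored as 8-bit values, as above): a 4x4 block of 8-bit pixels occupies 16 x 8 = 128 bits, while its BTC representation needs only the 16-bit bitmap plus an 8-bit mean and an 8-bit standard deviation, i.e. 32 bits in total, giving the 128/32 = 4:1 ratio.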

The BTC algorithm was used for compressing Mars Pathfinder's rover images.

Compression procedure

A 256x256 pixel image is divided into blocks of typically 4x4 pixels. For each block the mean and standard deviation are calculated; these values change from block to block. These two values define what values the reconstructed or new block will have; in other words, each block of the BTC compressed image will have the same mean and standard deviation as the corresponding block of the original image. A two-level quantization on the block is where we gain the compression; it is performed as follows:

$$
y(i,j) = \begin{cases} 1, & x(i,j) > \bar{x} \\ 0, & x(i,j) \le \bar{x} \end{cases}
$$

where x(i,j) are the pixel elements of the original block, y(i,j) are the elements of the compressed block and x̄ is the block mean. In words this can be explained as: if a pixel value is greater than the mean it is assigned the value "1", otherwise "0". Values equal to the mean can have either a "1" or a "0" depending on the preference of the person or organisation implementing the algorithm.
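As a rough illustration of this thresholding step, a minimal sketch in Python with NumPy (neither of which is implied by the original description) might look like the following, with ties at the mean assigned a 0:

    import numpy as np

    def btc_bitmap(block):
        # 1 where the pixel exceeds the block mean, 0 otherwise
        # (pixels exactly equal to the mean are assigned 0 here).
        return (block > block.mean()).astype(np.uint8)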

This 16-bit block is stored or transmitted along with the values of the mean and standard deviation. Reconstruction is made with two values, "a" and "b", which preserve the mean and the standard deviation.
The values of "a" and "b" can be computed as follows:
$$
a = \bar{x} - \sigma\sqrt{\frac{q}{m-q}}
$$

$$
b = \bar{x} + \sigma\sqrt{\frac{m-q}{q}}
$$

where σ is the standard deviation, m is the total number of pixels in the block and q is the number of pixels greater than the mean (x̄).
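A hedged sketch of these formulas in the same Python/NumPy style (the function name and the handling of flat blocks are illustrative choices, not part of the original algorithm description):

    import numpy as np

    def btc_levels(block):
        # Reconstruction levels that preserve the block's mean and
        # (population) standard deviation.
        mean = block.mean()
        sigma = block.std()
        m = block.size                    # total pixels in the block
        q = int((block > mean).sum())     # pixels greater than the mean
        if q == 0 or q == m:
            # Flat block: every pixel is at (or on one side of) the mean.
            return mean, mean
        a = mean - sigma * np.sqrt(q / (m - q))
        b = mean + sigma * np.sqrt((m - q) / q)
        return a, b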

To reconstruct the image, or create its approximation, elements assigned a 0 are replaced with the "a" value and elements assigned a 1 are replaced with the "b" value.
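Continuing the same sketch, decoding is a single substitution (again an illustrative Python/NumPy fragment rather than a reference implementation):

    import numpy as np

    def btc_decode(bitmap, a, b):
        # Replace every 0 with a and every 1 with b.
        return np.where(bitmap == 1, b, a)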



This demonstrates that the algorithm is asymmetric, in that the encoder has much more work to do than the decoder. This is because the decoder simply replaces 1s and 0s with the estimated values, whereas the encoder is also required to calculate the mean, the standard deviation and the two values to use.

Encoder

Take a 4x4 block from an image, in this case the mountain test image:

    245  239  249  239
    245  245  239  235
    245  245  245  245
    245  235  235  239
Like any small block from an image, this appears rather boring to work with, as the numbers are all quite similar; this is the nature of lossy compression and why it can work so well for images. We now need to calculate two values from this data: the mean and the standard deviation. The mean comes to 241.875, a simple calculation which should require no further explanation. The standard deviation is easily calculated as approximately 4.36. From these, the values of "a" and "b" can be calculated using the previous equations; they come out to be 236.935 and 245.718 respectively. The last calculation that needs to be done on the encoding side is to set the matrix to transmit to 1s and 0s so that each pixel can be transmitted as a single bit.

    1  0  1  0
    1  1  0  0
    1  1  1  1
    1  0  0  0
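The figures quoted above can be checked with a short, self-contained calculation (Python and NumPy are assumed here; the block values are the ones shown above):

    import numpy as np

    block = np.array([[245, 239, 249, 239],
                      [245, 245, 239, 235],
                      [245, 245, 245, 245],
                      [245, 235, 235, 239]], dtype=float)

    mean = block.mean()                            # 241.875
    sigma = block.std()                            # ~4.36
    m, q = block.size, int((block > mean).sum())   # 16 pixels, 9 above the mean
    a = mean - sigma * np.sqrt(q / (m - q))        # ~236.935
    b = mean + sigma * np.sqrt((m - q) / q)        # ~245.718
    bitmap = (block > mean).astype(np.uint8)       # the bit matrix above
    print(mean, sigma, a, b)
    print(bitmap)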
Decoder

Now at the decoder side all we need to do is reassign the "a" and "b" values to the 1 and 0 pixels. This will give us the following block:

    246  237  246  237
    246  246  237  237
    246  246  246  246
    246  237  237  237
As can be seen, the block has been reconstructed with the two values of "a" and "b" rounded to integers (because images aren't defined to store floating-point numbers). When working through the theory, this is a good point to calculate the mean and standard deviation of the reconstructed block: they should be very close to the original mean and standard deviation, with any small difference caused by quantising "a" and "b" to integers.
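As a final check, the reconstruction and the suggested comparison of mean and standard deviation can be sketched as follows (Python/NumPy assumed, with "a" and "b" rounded to integers as discussed above):

    import numpy as np

    a, b = 236.935, 245.718
    bitmap = np.array([[1, 0, 1, 0],
                       [1, 1, 0, 0],
                       [1, 1, 1, 1],
                       [1, 0, 0, 0]], dtype=np.uint8)

    # 0s become a, 1s become b, rounded to integers as an image would store them.
    reconstructed = np.where(bitmap == 1, b, a).round().astype(np.uint8)
    print(reconstructed)
    print(reconstructed.mean(), reconstructed.std())  # close to 241.875 and 4.36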