A simple black and white digitized image consists of a rectangular or square array of pixels (say 256 by 256). Each of these pixels is assigned a number that indicates how light or dark that particular pixel is in the image; usually there are 256 gray levels, with 0 corresponding to black (no brightness on your screen) and 255 to white (maximum brightness). This means that one pixel uses 8 bits; for the whole 256x256 image this gives 256 * 256 * 8 = 524288 bits, which is 524288/8 = 65536 bytes or 65536/1024 = 64 KB. Color images of course use more memory (typically three times as much, one byte each for the red, green, and blue components), and movies, with 24-30 images per second, use much more still.
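The storage arithmetic above can be checked with a few lines of code; this is just the bookkeeping from the paragraph, nothing more:

```python
# Storage needed for an uncompressed 256x256 grayscale image,
# at 8 bits (one byte) per pixel.
width, height = 256, 256
bits_per_pixel = 8

total_bits = width * height * bits_per_pixel   # 524288 bits
total_bytes = total_bits // 8                  # 65536 bytes
total_kb = total_bytes // 1024                 # 64 KB

print(total_bits, total_bytes, total_kb)       # 524288 65536 64
```

The same count scales directly: a color image at three bytes per pixel takes 192 KB, and one second of movie at 24 such frames takes over 4 MB.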
That is why it is very important to be able to compress images. In many cases, you can obtain images that are so close to the original as to be virtually indistinguishable from it, yet are stored in only a fraction of the space originally needed. For some applications, you may even accept a noticeable distortion, as long as the image still looks reasonably good and the memory savings are really huge.
There exist several types of algorithms that compress images. We shall illustrate one of them in this lab, called "subband filtering"; it is related to a mathematical concept called the "wavelet transform".
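To give a first idea of subband filtering, here is a minimal sketch of a one-level decomposition of a 1-D signal using the simplest wavelet, the Haar wavelet (the function names are chosen for illustration; the lab will develop the actual filters used). The lowpass band holds pairwise averages and the highpass band pairwise differences; the signal is recovered exactly from the two bands, and compression becomes possible because, for smooth signals, the highpass band consists mostly of small numbers:

```python
def haar_analysis(x):
    """Split a signal of even length into lowpass (averages)
    and highpass (differences) subbands."""
    low = [(x[2 * i] + x[2 * i + 1]) / 2 for i in range(len(x) // 2)]
    high = [(x[2 * i] - x[2 * i + 1]) / 2 for i in range(len(x) // 2)]
    return low, high

def haar_synthesis(low, high):
    """Reconstruct the original signal from the two subbands."""
    x = []
    for l, h in zip(low, high):
        x.extend([l + h, l - h])
    return x

signal = [8, 6, 7, 5, 3, 0, 9, 2]
low, high = haar_analysis(signal)
assert haar_synthesis(low, high) == signal  # perfect reconstruction
```

For a 2-D image the same idea is applied along rows and then along columns, giving four subbands per level; that is the version used in practice.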