Title: Image Compression
1. Image Compression
Sukhrob Atoev
2. Contents
- Introduction
- Image Compression/Decompression Steps
- Error Metrics
- Classifying image data
- Bit Allocation
- Quantization
- Entropy Coding
3. Introduction
Every day an enormous amount of information is stored, processed, and transmitted.
Images take a lot of storage space: over a 56,000 bps connection, a 4 MB image takes almost 10 minutes to transmit, and 1.54 GB takes almost 66 hours.
Image compression addresses the problem of reducing the amount of data required to represent a digital image.
It also plays an important role in video conferencing, remote sensing, satellite TV, FAX, and document and medical imaging.
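As a quick check of the first figure: $4\ \text{MB} \times 8\ \text{bits/byte} = 32{,}000{,}000\ \text{bits}$, and $32{,}000{,}000 \div 56{,}000\ \text{bps} \approx 571\ \text{s} \approx 9.5$ minutes.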
4. Image Compression/Decompression Steps
Compression steps:
1. Specification: choose the rate (bits available) and distortion parameters.
2. Classification: divide the image data into classes based on their importance.
3. Bit allocation: divide the available bit budget among the classes.
4. Quantization: quantize each class according to its bit allocation.
5. Encoding: encode each class using an entropy coder and write the result to the file.

Decompression steps:
1. Decoding: read the quantized data from the file using an entropy decoder (reverse of step 5).
2. Dequantization: normalize the quantized values (reverse of steps 4 and 3).
3. Rebuilding: reconstruct the normalized data into image pixels (reverse of step 2).
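To make the steps concrete, here is a toy, self-contained C# sketch that applies steps 2-5 to a short 1-D signal instead of an image. It is a hypothetical illustration, not a real codec; the classification rule, bit counts, and value range are all assumptions made for the example.

using System;
using System.IO;
using System.Linq;

class PipelineSketch
{
    static void Main()
    {
        double[] signal = { 10.1, 10.3, 10.2, 55.0, 10.4, 10.2, 60.0, 10.3 };

        // Step 2 (classification): samples far from the mean count as "detail".
        double mean = signal.Average();
        bool[] isDetail = signal.Select(s => Math.Abs(s - mean) > 20).ToArray();

        // Step 3 (bit allocation): 6 bits for the smooth class, 3 for detail.
        // Step 4 (quantization): fewer bits means a coarser step size.
        byte[] quantized = signal.Select((s, i) =>
        {
            int bits = isDetail[i] ? 3 : 6;
            double step = 100.0 / ((1 << bits) - 1); // assumed value range 0..100
            return (byte)Math.Round(s / step);
        }).ToArray();

        // Step 5 (encoding): a real coder would entropy-code these symbols
        // before writing them out.
        File.WriteAllBytes("compressed.bin", quantized);
        Console.WriteLine(string.Join(" ", quantized));
    }
}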
5. Error Metrics
Compression techniques can be compared using two error metrics.

Mean square error (MSE), where $I$ is the original image, $\hat{I}$ is the decompressed image, and $M, N$ are the dimensions of the images:

$\text{MSE} = \frac{1}{MN}\sum_{x=0}^{M-1}\sum_{y=0}^{N-1}\left[I(x,y)-\hat{I}(x,y)\right]^2$

Peak signal-to-noise ratio (PSNR), in dB:

$\text{PSNR} = 20\log_{10}\frac{255}{\sqrt{\text{MSE}}}$
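A straightforward C# sketch of these two metrics, assuming 8-bit grayscale images stored as two-dimensional byte arrays:

using System;

static class ErrorMetrics
{
    // Mean square error between an original and a decompressed image,
    // both M x N arrays of 8-bit pixel values.
    public static double Mse(byte[,] original, byte[,] decompressed)
    {
        int m = original.GetLength(0), n = original.GetLength(1);
        double sum = 0;
        for (int x = 0; x < m; x++)
            for (int y = 0; y < n; y++)
            {
                double diff = original[x, y] - decompressed[x, y];
                sum += diff * diff;
            }
        return sum / (m * n);
    }

    // PSNR in dB: 20 * log10(255 / sqrt(MSE)) for 8-bit images.
    public static double Psnr(byte[,] original, byte[,] decompressed)
    {
        double mse = Mse(original, decompressed);
        return 20.0 * Math.Log10(255.0 / Math.Sqrt(mse));
    }
}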
6. Classifying Image Data
- An image is represented as a two-dimensional array of coefficients.
- Each coefficient represents the brightness level at that point.
- Natural images have smooth color variations.
- The smooth variations can be termed low-frequency variations, and the fine details high-frequency variations.
- Separating the smooth variations from the details of the image can be done using the Discrete Cosine Transform (DCT) or the Discrete Wavelet Transform (DWT), as sketched below.
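For illustration, a direct (unoptimized) C# implementation of the forward 8 × 8 DCT-II used in JPEG; production codecs use fast factorizations instead:

using System;

static class Dct
{
    // Forward 2-D DCT-II of an 8x8 block, as used in JPEG.
    // Input pixels are typically level-shifted (value - 128) first.
    public static double[,] Forward(double[,] block)
    {
        const int N = 8;
        var F = new double[N, N];
        for (int u = 0; u < N; u++)
            for (int v = 0; v < N; v++)
            {
                double sum = 0;
                for (int x = 0; x < N; x++)
                    for (int y = 0; y < N; y++)
                        sum += block[x, y]
                             * Math.Cos((2 * x + 1) * u * Math.PI / (2.0 * N))
                             * Math.Cos((2 * y + 1) * v * Math.PI / (2.0 * N));
                double cu = u == 0 ? 1.0 / Math.Sqrt(2) : 1.0;
                double cv = v == 0 ? 1.0 / Math.Sqrt(2) : 1.0;
                F[u, v] = 0.25 * cu * cv * sum; // 2/N = 1/4 for N = 8
            }
        return F;
    }
}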
7. Bit Allocation
- Each class is allocated a portion of the total bit budget.
- The allocation is chosen so that the compressed image has the minimum possible distortion.
- Rate-distortion theory is used to solve this bit-allocation problem.
- Starting from a full allocation, we keep reducing one bit at a time until we achieve optimality.
8. Bit Allocation
Rate-distortion theory:

$D$ = total distortion
$R = \sum_i P(i) \times B(i)$ = total rate, where $P(i)$ is the probability of class $i$ and $B(i)$ is its bit allocation.

The benefit of a bit is the decrease in distortion due to receiving that bit.
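A hedged sketch of the greedy allocation described above. The per-class distortion model (variance × 2^(−2b), a common high-rate approximation) is an assumption, not something given in the slides:

using System;
using System.Linq;

static class BitAllocation
{
    // Greedy bit allocation: start with a generous allocation and remove
    // one bit at a time from the class where removing it increases total
    // distortion the least, until the rate R = sum P(i)*B(i) fits the budget.
    public static int[] Allocate(double[] p, double[] variance, double budget)
    {
        int n = p.Length;
        int[] b = Enumerable.Repeat(16, n).ToArray(); // generous starting point

        double Rate() => Enumerable.Range(0, n).Sum(i => p[i] * b[i]);

        // Increase in distortion if class i loses one bit, under the
        // assumed model D_i(b) = variance_i * 2^(-2b).
        double Cost(int i) =>
            variance[i] * (Math.Pow(2, -2 * (b[i] - 1)) - Math.Pow(2, -2 * b[i]));

        while (Rate() > budget)
        {
            int best = Enumerable.Range(0, n)
                                 .Where(i => b[i] > 0)
                                 .OrderBy(i => Cost(i))
                                 .First();
            b[best]--; // remove the bit whose loss hurts the least
        }
        return b;
    }
}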
9. Quantization
Why? To reduce the number of bits per sample.
- Based on a quantization table
- Quantization table values range from 1 to 255
- Each DCT coefficient is divided by its table value
- If the rounded result is greater than 0, the DCT coefficient is kept; otherwise it is discarded

The original image uses one byte (8 bits) for each pixel, so each 8 × 8 block needs 64 bytes (512 bits) of memory.

Each DCT coefficient $F(u,v)$ is divided by the corresponding quantization matrix entry $Q(u,v)$ and rounded to the nearest integer:

$F_q(u,v) = \text{round}\!\left(\frac{F(u,v)}{Q(u,v)}\right)$
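A minimal C# sketch of this step (and its inverse on the decoder side), assuming 8 × 8 blocks:

using System;

static class Quantizer
{
    // Quantize an 8x8 block of DCT coefficients: each F(u,v) is divided
    // by Q(u,v) and rounded to the nearest integer.
    public static int[,] Quantize(double[,] F, int[,] Q)
    {
        var result = new int[8, 8];
        for (int u = 0; u < 8; u++)
            for (int v = 0; v < 8; v++)
                result[u, v] = (int)Math.Round(F[u, v] / Q[u, v]);
        return result;
    }

    // Dequantization (the decoder side): multiply back by Q(u,v).
    public static double[,] Dequantize(int[,] Fq, int[,] Q)
    {
        var result = new double[8, 8];
        for (int u = 0; u < 8; u++)
            for (int v = 0; v < 8; v++)
                result[u, v] = Fq[u, v] * Q[u, v];
        return result;
    }
}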
10. Quantization Matrix
16 11 10 16 24 40 51 61
12 12 14 19 26 58 60 55
14 13 16 24 40 57 69 56
14 17 22 29 51 87 80 62
18 22 37 56 68 109 103 77
24 35 55 64 81 104 113 92
49 64 78 87 103 121 120 101
72 92 95 98 112 100 103 99
Luminance quantization matrix (the standard JPEG table). [The corresponding chrominance quantization matrix was shown alongside in the original slide.]
11. Quantization
DCT coefficients:

150  80  40  14   4   2   1   0
 92  75  36  10   6   1   0   0
 52  38  26   8   7   4   0   0
 12   8   6   4   2   1   0   0
  4   3   2   0   0   0   0   0
  2   2   1   1   0   0   0   0
  1   1   0   0   0   0   0   0
  0   0   0   0   0   0   0   0

Quantization matrix:

  1   1   2   4   8  16  32  64
  1   1   2   4   8  16  32  64
  2   2   2   4   8  16  32  64
  4   4   4   4   8  16  32  64
  8   8   8   8   8  16  32  64
 16  16  16  16  16  16  32  64
 32  32  32  32  32  32  32  64
 64  64  64  64  64  64  64  64

Quantized coefficients, round(F(u,v) / Q(u,v)):

150  80  20   4   1   0   0   0
 92  75  18   3   1   0   0   0
 26  19  13   2   1   0   0   0
  3   2   2   1   0   0   0   0
  1   0   0   0   0   0   0   0
  0   0   0   0   0   0   0   0
  0   0   0   0   0   0   0   0
  0   0   0   0   0   0   0   0
12. Quantization

The quantized values are transmitted in the order given by the zig-zag sequence, which turns the 8 × 8 block into a 1 × 64 vector and groups the (mostly zero) high-frequency coefficients at the end.
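One compact way to generate this order in C# (a sketch; it sorts block positions by anti-diagonal and alternates the scan direction, which reproduces the standard JPEG zig-zag):

using System;
using System.Linq;

static class ZigZag
{
    // Flatten an 8x8 block of quantized coefficients into a 1x64 vector
    // in zig-zag order: scan each anti-diagonal (u + v = constant),
    // reversing direction on every other diagonal.
    public static int[] Scan(int[,] block)
    {
        var order = Enumerable.Range(0, 64)
            .Select(i => (u: i / 8, v: i % 8))
            .OrderBy(c => c.u + c.v)                        // by anti-diagonal
            .ThenBy(c => (c.u + c.v) % 2 == 0 ? -c.u : c.u) // alternate direction
            .ToArray();

        return order.Select(c => block[c.u, c.v]).ToArray();
    }
}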
13. Entropy Coding
An entropy coder gives additional compression.
- Entropy coding algorithms:
  - Huffman coding
  - Arithmetic coding
  - Golomb coding (a special case of Huffman coding)
14. Huffman Coding
Huffman coding is an optimal codeword design method: it is simple to use, uniquely decodable, and results in the lowest average bit rate.

Input:
$A = \{a_1, a_2, a_3, \ldots, a_n\}$: symbols, alphabet of size $n$
$P = \{P_1, P_2, P_3, \ldots, P_n\}$: weights (probabilities) of the symbols

Output:
Code $C(A, P) = (C_1, C_2, C_3, \ldots, C_n)$: binary codewords

Average bit rate: $R(C) = \sum_{i=1}^{n} P_i \cdot \text{length}(C_i)$

Entropy: $H(A) = \sum_{P_i > 0} P_i \log_2 \frac{1}{P_i} = -\sum_{P_i > 0} P_i \log_2 P_i$
15. Huffman Coding
Message   Codeword   Probability
a1        0          P1 = 5/8
a2        100        P2 = 3/32
a3        110        P3 = 3/32
a4        1110       P4 = 1/32
a5        101        P5 = 1/8
a6        1111       P6 = 1/32
[Tree diagram: illustration of codeword generation in Huffman coding. Merging the two least-probable nodes repeatedly creates internal nodes a7 (1/16), a8 (5/32), a9 (7/32), a10 (3/8), and the root a11 (1); each edge is labeled 0 or 1.]
16. Huffman Coding
Average bit rate of the Huffman code:

$R(C) = \sum_{i=1}^{n} P_i \cdot \text{length}(C_i) = \frac{5}{8}\cdot 1 + \frac{3}{32}\cdot 3 + \frac{3}{32}\cdot 3 + \frac{1}{32}\cdot 4 + \frac{1}{8}\cdot 3 + \frac{1}{32}\cdot 4 = 1.8125\ \text{bits/symbol}$

Entropy:

$H(A) = -\sum_{P_i>0} P_i \log_2 P_i = -\left(\frac{5}{8}\log_2\frac{5}{8} + \frac{3}{32}\log_2\frac{3}{32} + \frac{3}{32}\log_2\frac{3}{32} + \frac{1}{32}\log_2\frac{1}{32} + \frac{1}{8}\log_2\frac{1}{8} + \frac{1}{32}\log_2\frac{1}{32}\right) \approx 1.752\ \text{bits/symbol}$
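A minimal Huffman builder in C# that reproduces codeword lengths consistent with the table above and the 1.8125 bits/symbol rate (the exact 0/1 labels may differ between equally valid Huffman codes):

using System;
using System.Collections.Generic;
using System.Linq;

static class Huffman
{
    class Node { public double P; public string Sym; public Node L, R; }

    // Build a Huffman code: repeatedly merge the two least-probable
    // nodes, then read codewords off the tree (0 = left, 1 = right).
    public static Dictionary<string, string> Build(Dictionary<string, double> probs)
    {
        var nodes = probs.Select(kv => new Node { Sym = kv.Key, P = kv.Value }).ToList();
        while (nodes.Count > 1)
        {
            var ordered = nodes.OrderBy(n => n.P).ToList();
            Node a = ordered[0], b = ordered[1];
            nodes.Remove(a); nodes.Remove(b);
            nodes.Add(new Node { P = a.P + b.P, L = a, R = b });
        }

        var codes = new Dictionary<string, string>();
        void Walk(Node n, string code)
        {
            if (n.Sym != null) { codes[n.Sym] = code.Length > 0 ? code : "0"; return; }
            Walk(n.L, code + "0");
            Walk(n.R, code + "1");
        }
        Walk(nodes[0], "");
        return codes;
    }

    static void Main()
    {
        var probs = new Dictionary<string, double>
        {
            ["a1"] = 5 / 8.0, ["a2"] = 3 / 32.0, ["a3"] = 3 / 32.0,
            ["a4"] = 1 / 32.0, ["a5"] = 1 / 8.0, ["a6"] = 1 / 32.0
        };
        var codes = Build(probs);
        double rate = probs.Sum(kv => kv.Value * codes[kv.Key].Length);
        Console.WriteLine($"Average bit rate: {rate} bits/symbol"); // prints 1.8125
    }
}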
17. Arithmetic Coding
- Arithmetic coding completely bypasses the idea of replacing an input symbol with a specific code.
- The arithmetic coder maintains two numbers, low and high, which represent a subinterval [low, high) of the range [0, 1). Initially low = 0 and high = 1.
- The range between low and high is divided between the symbols according to their probabilities.
- Numbers are shown here as decimals, but in practice they are always binary.
18. Arithmetic Coding
Message: a1, a2, a3, a3, a4
Encoded message: 0.068 (any number inside the final subinterval identifies the message)
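A small C# sketch of the interval narrowing for this message. The model probabilities (a1 = 0.2, a2 = 0.2, a3 = 0.4, a4 = 0.2) are an assumption, not given on the slide, but they reproduce a final interval containing 0.068:

using System;
using System.Collections.Generic;

class ArithmeticDemo
{
    static void Main()
    {
        // Assumed symbol model: cumulative subintervals of [0, 1).
        var model = new Dictionary<string, (double Lo, double Hi)>
        {
            ["a1"] = (0.0, 0.2), ["a2"] = (0.2, 0.4),
            ["a3"] = (0.4, 0.8), ["a4"] = (0.8, 1.0)
        };

        double low = 0.0, high = 1.0;
        foreach (var sym in new[] { "a1", "a2", "a3", "a3", "a4" })
        {
            double range = high - low;
            high = low + range * model[sym].Hi; // shrink [low, high) to the
            low  = low + range * model[sym].Lo; // current symbol's subinterval
            Console.WriteLine($"{sym}: [{low}, {high})");
        }
        // Final interval is [0.06752, 0.0688); any number in it,
        // e.g. 0.068, identifies the message.
    }
}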
19. Arithmetic Coding vs. Huffman Coding
- In typical English text, the space character is the most common, with a probability of about 18%, so the Huffman redundancy is quite small.
- For black-and-white images, arithmetic coding is much better than Huffman coding, unless a blocking technique is used.
- Arithmetic coding requires less memory, as the symbol representation is calculated on the fly.
- Arithmetic coding is more suitable for high-performance models, where there are confident predictions.
20. Arithmetic Coding vs. Huffman Coding
- Huffman decoding is generally faster than arithmetic decoding.
- In arithmetic coding it is not easy to start decoding in the middle of the stream, while in Huffman coding we can use starting points.
- In large collections of text and images, Huffman coding is likely to be used for the text, and arithmetic coding for the images.
21. Sample: Compressing an Image
The following Windows Forms program saves the same bitmap at three JPEG quality levels:

using System;
using System.Drawing;
using System.Drawing.Imaging;
using System.Windows.Forms;

namespace WindowsFormsApplication1
{
    public partial class Form1 : Form
    {
        public Form1()
        {
            InitializeComponent();
        }

        private void Form1_Load(object sender, EventArgs e)
        {
            using (Bitmap bmp1 = new Bitmap("C:\\Sukhrob\\Pier.jpg"))
            {
                ImageCodecInfo jpgEncoder = GetEncoder(ImageFormat.Jpeg);

                // Create an Encoder object for the Quality parameter category.
                System.Drawing.Imaging.Encoder myEncoder =
                    System.Drawing.Imaging.Encoder.Quality;

                // Create an EncoderParameters object.
                EncoderParameters myEncoderParameters = new EncoderParameters(1);

                // Medium quality (50).
                EncoderParameter myEncoderParameter =
                    new EncoderParameter(myEncoder, 50L);
                myEncoderParameters.Param[0] = myEncoderParameter;
                bmp1.Save("C:\\Sukhrob\\PierMedium.jpg", jpgEncoder, myEncoderParameters);

                // High quality (100).
                myEncoderParameter = new EncoderParameter(myEncoder, 100L);
                myEncoderParameters.Param[0] = myEncoderParameter;
                bmp1.Save("C:\\Sukhrob\\PierHigh.jpg", jpgEncoder, myEncoderParameters);

                // Low quality (0).
                myEncoderParameter = new EncoderParameter(myEncoder, 0L);
                myEncoderParameters.Param[0] = myEncoderParameter;
                bmp1.Save("C:\\Sukhrob\\PierLow.jpg", jpgEncoder, myEncoderParameters);
            }
        }

        private ImageCodecInfo GetEncoder(ImageFormat format)
        {
            ImageCodecInfo[] codecs = ImageCodecInfo.GetImageDecoders();
            foreach (ImageCodecInfo codec in codecs)
            {
                if (codec.FormatID == format.Guid)
                    return codec;
            }
            return null;
        }
    }
}
23. Results
Original image (826 KB) and high-quality image (652 KB). [Images shown side by side in the original slide.]
24. Results
Medium-quality image (83.1 KB) and low-quality image (20.5 KB). [Images shown side by side in the original slide.]
25. The End