Title: Distributed Source Coding
1. Distributed Source Coding
- By Raghunadh K Bhattar, EE Dept, IISc
- Under the guidance of Prof. K. R. Ramakrishnan
2. Outline of the Presentation
- Introduction
- Why Distributed Source Coding
- Source Coding
- How Source Coding Works
- How Channel Coding Works
- Distributed Source Coding
- Slepian-Wolf Coding
- Wyner-Ziv Coding
- Applications of DSC
- Conclusion
3. Why Distributed Source Coding?
- Low-complexity encoders
- Error resilience: robust to transmission errors
- These two attributes make DSC an enabling technology for wireless communications
4. Low-Complexity Wireless Handsets
Courtesy: Nicolas Gehrig
5. Distributed Source Coding (DSC)
- Compression of correlated sources: separate encoding, joint decoding
6. Source Coding (Data Compression)
- Exploits the redundancy in the source to reduce the data required for storage or transmission
- Highly complex encoders are required for compression (MPEG, H.264, ...)
- However, the decoders are simple!
- Highly complex encoders imply
  - Bulky handsets
  - High power consumption
  - Short battery life
7. How Source Coding Works
- Types of redundancy
  - Spatial redundancy: transform or predictive coding
  - Temporal redundancy: predictive coding
- In predictive coding, the next value in the sequence is predicted from the past values, and the predicted value is subtracted from the actual value
- Only the difference is sent to the decoder
- Let the past values be C and the predicted value y = f(C). If the actual value is x, then (x - y) is sent to the decoder.
8.
- The decoder, knowing the past values C, can also predict the value y.
- With the knowledge of (x - y), the decoder recovers x = y + (x - y), the desired value.
9. [Block diagram: the encoder predicts y from the past values C and transmits the residual x - y; the decoder forms the same prediction and adds it back. A minimal sketch follows.]
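A minimal Python sketch of this loop (not from the slides); purely for illustration, the predictor f(C) is taken to be "repeat the previous value":

```python
# A minimal sketch of predictive coding: the predictor f(C) is assumed to be
# "repeat the last value"; real codecs use much richer predictors.

def encode(samples):
    """Send only residuals x - y, where y is predicted from past values."""
    residuals = []
    prev = 0                      # both sides agree on the initial context C
    for x in samples:
        y = prev                  # predicted value y = f(C)
        residuals.append(x - y)   # only the difference is transmitted
        prev = x
    return residuals

def decode(residuals):
    """Recover x = y + (x - y) using the same predictor."""
    samples = []
    prev = 0
    for r in residuals:
        x = prev + r              # decoder forms the same prediction y = prev
        samples.append(x)
        prev = x
    return samples

data = [10, 12, 13, 13, 15]
assert decode(encode(data)) == data
print(encode(data))               # [10, 2, 1, 0, 2] -- small, compressible values
```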
11. Compression Toy Example
- Suppose X and Y are two uniformly distributed i.i.d. sources over the 3-bit values {000, 001, 010, 011, 100, 101, 110, 111}: X needs 3 bits and Y needs 3 bits.
- If they are related (i.e., correlated), can we reduce the data rate?
- Let the relation be: X and Y differ in at most one bit, i.e., the Hamming distance between X and Y is at most one.
12.
- H(X) = H(Y) = 3 bits. Let Y = 101; then X is one of 101 (code 0), 100 (code 1), 111 (code 2), 001 (code 3).
- Code = X ⊕ Y, and H(X|Y) = 2 bits: we need 2 bits to transmit X and 3 bits for Y, i.e., 5 bits in total for both instead of 6.
- Here the encoder must know the outcome of Y before it can code X with 2 bits.
- Decoding: X = Y ⊕ Code, where code 0 = 000, 1 = 001, 2 = 010, 3 = 100. A sketch of this scheme follows.
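A small Python sketch of this encoder-knows-Y scheme (illustrative; the code table above is hard-coded):

```python
# Sketch of the toy example when the encoder also sees Y: since X and Y
# differ in at most one bit, X XOR Y takes only 4 values, so 2 bits suffice.

DIFF_TO_CODE = {0b000: 0, 0b001: 1, 0b010: 2, 0b100: 3}
CODE_TO_DIFF = {v: k for k, v in DIFF_TO_CODE.items()}

def encode(x, y):
    return DIFF_TO_CODE[x ^ y]     # 2-bit index of the difference pattern

def decode(code, y):
    return y ^ CODE_TO_DIFF[code]  # X = Y XOR (X XOR Y)

y = 0b101
for x in (0b101, 0b100, 0b111, 0b001):  # the four values within distance 1 of Y
    assert decode(encode(x, y), y) == x
```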
13.
- Now assume that we don't know the outcome of Y (but it is sent to the decoder using 3 bits). Can we still transmit X using 2 bits?
- The answer is YES (surprisingly!)
- How?
14. Partition
- Group all eight symbols into four sets of two members each:
  - {000, 111} → 0
  - {001, 110} → 1
  - {010, 101} → 2
  - {100, 011} → 3
- Trick: partition so that each set has Hamming distance 3 between its members.
15.
- X is encoded simply by sending the index of the set that contains X.
- Let X = 100; then the index for X is 3.
- Suppose the decoder has already received a correlated Y = 101.
- How do we recover X, knowing Y = 101 (from now on called the side information) at the decoder and the index (3) from X?
16.
- Since the index is 3, we know that the value of X is either 100 or 011.
- Measure the Hamming distance between the two possible values of X and the side information Y:
  - 100 ⊕ 101 = 001, Hamming distance 1
  - 011 ⊕ 101 = 110, Hamming distance 2
- Therefore X = 100. A sketch of this encoder/decoder pair follows.
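A Python sketch of this index-and-closest-match procedure (illustrative; the coset table is the one from slide 14):

```python
# Distributed version: the encoder sees only X and sends the 2-bit index of
# the coset containing X; the decoder picks the coset member closest in
# Hamming distance to the side information Y.

COSETS = [(0b000, 0b111), (0b001, 0b110), (0b010, 0b101), (0b100, 0b011)]

def encode(x):
    return next(i for i, c in enumerate(COSETS) if x in c)

def hamming(a, b):
    return bin(a ^ b).count("1")

def decode(index, y):
    # The two candidates are 3 apart, so at most one is within distance 1 of Y.
    return min(COSETS[index], key=lambda cand: hamming(cand, y))

assert encode(0b100) == 3
assert decode(3, 0b101) == 0b100   # the worked example from slide 16
```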
17. Source Coding
- Y = 101, X = 100; Code = 100 ⊕ 101 = 001 (index 1)
- Decoding: Y ⊕ Code = 101 ⊕ 001 = 100 = X
- This works only if the encoder used the decoder's exact Y. Decoding Y ⊕ Code against every possible Y:

  Y    Code  Y ⊕ Code  Correct?
  000  001   001       ✗
  001  001   000       ✗
  010  001   011       ✗
  011  001   010       ✗
  100  001   101       ✗
  101  001   100       ✓
  110  001   111       ✗
  111  001   110       ✗
18. Distributed Source Coding
- X = 100, Code = 3 (coset {100, 011}); the decoder picks the coset member closer to Y:

  Side information Y  d(011, Y)  d(100, Y)  Output
  000                 2          1          X = 100
  110                 2          1          X = 100
  101                 2          1          X = 100
  100                 3          0          X = 100
  111                 1          2          X = 011 (erroneous)

- Correlated side information (Hamming distance ≤ 1): no error in decoding.
- Uncorrelated side information (Y = 111): erroneous decoding.
19.
- How do we partition the input sample space? Must we always find some trick? If the input sample space is large (even a few hundred symbols), can we still find one?
- The trick is matrix multiplication: we need a matrix that partitions the input space.
- For the toy example above: Index = xHᵀ over the field GF(2), where H is the parity-check matrix in error-correction terminology. A sketch follows.
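A sketch of the matrix version, using one possible parity-check matrix H of the (3,1) repetition code (my choice for illustration; any H with the same row space gives the same grouping, and the 2-bit syndromes play the role of the 0-3 indices used earlier):

```python
# Index = x H^T over GF(2): the syndrome labels the coset containing x.

H = [(1, 1, 0),
     (0, 1, 1)]   # a parity-check matrix of the (3,1) repetition code

def syndrome(x_bits):
    return tuple(sum(h * x for h, x in zip(row, x_bits)) % 2 for row in H)

groups = {}
for n in range(8):
    bits = ((n >> 2) & 1, (n >> 1) & 1, n & 1)
    groups.setdefault(syndrome(bits), []).append(bits)
print(groups)   # four cosets of size two, e.g. (0, 0): [(0,0,0), (1,1,1)]
```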
20. Coset Partition
- Look at the partitions again:
  - {000, 111} → 0: this is the repetition code (in error-correction terminology)
  - {001, 110} → 1
  - {010, 101} → 2
  - {100, 011} → 3
- These are the cosets of the repetition code induced by the elements of the sample space of X.
21. Channel Coding
- In channel coding, controlled redundancy is added to the information bits to protect them from channel noise.
- Channel coding (error-control coding) falls into two categories:
  - Error detection
  - Error correction
- In error detection, the introduced redundancy is just enough to detect errors.
- In error correction, we need to introduce more redundancy.
22. [Figure: codes with minimum distance d_min = 1, 2, and 3.]
23. Parity Check

  X    Parity
  000  0
  001  1
  010  1
  011  0
  100  1
  101  0
  110  0
  111  1

Minimum Hamming distance = 2. How do we make the Hamming distance 3? It is not obvious (or not easy) how to reach minimum Hamming distance 3 this way. A sketch verifying d_min = 2 follows.
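A short check (not from the slides) that the even-parity code above indeed has minimum distance 2, enough to detect but not correct a single bit error:

```python
# Append an even-parity bit to every 3-bit word and compute the code's
# minimum Hamming distance by exhaustive search.

from itertools import product

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

code = [bits + (sum(bits) % 2,) for bits in product((0, 1), repeat=3)]
dmin = min(hamming(a, b) for a in code for b in code if a != b)
print(dmin)   # 2
```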
26. Slepian-Wolf Theorem
The Slepian-Wolf theorem states that correlated sources that do not communicate with each other can be encoded separately at the same total rate at which they could be encoded jointly: no performance loss occurs as long as they are decoded jointly.
- When correlated sources are coded independently but decoded jointly, the data rates are bounded by Rx ≥ H(X|Y), Ry ≥ H(Y|X), and Rx + Ry ≥ H(X,Y).
- That is, the total data rate must be at least equal to (or greater than) H(X,Y), and the individual data rates must be at least equal to (or greater than) H(X|Y) and H(Y|X) respectively. The sketch below checks these bounds on the toy example.
- D. Slepian and J. K. Wolf, "Noiseless coding of correlated information sources," IEEE Trans. Inf. Theory, vol. 19, pp. 471-480, July 1973.
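A numerical sketch of these bounds, assuming the toy correlation model of slide 11 ((X, Y) uniform over all 3-bit pairs at Hamming distance ≤ 1):

```python
# Check the Slepian-Wolf bounds on the toy example: H(X,Y) = 5 bits and
# H(X|Y) = 2 bits, matching the 2-bit coset index used earlier.

from math import log2
from itertools import product

pairs = [(x, y) for x, y in product(range(8), repeat=2)
         if bin(x ^ y).count("1") <= 1]
p = 1 / len(pairs)                        # 32 equiprobable pairs

H_xy = -sum(p * log2(p) for _ in pairs)   # joint entropy
H_y = 3.0                                 # Y is uniform on 8 values
H_x_given_y = H_xy - H_y                  # chain rule: H(X|Y) = H(X,Y) - H(Y)
print(H_xy, H_x_given_y)                  # 5.0 and 2.0 -> Rx >= 2, Rx + Ry >= 5
```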
27. DISCUS (DIstributed Source Coding Using Syndromes)
- The first constructive realization of the Slepian-Wolf bound using practical channel codes, where single-parity-check codes were used with the binning scheme.
- Wyner first proposed using a capacity-achieving binary linear channel code to solve the SW compression problem for a class of joint distributions.
- DISCUS extended Wyner's idea to the distributed rate-distortion (lossy compression) problem using channel codes.
- S. S. Pradhan and K. Ramchandran, "Distributed source coding using syndromes (DISCUS)," in Proc. IEEE Data Compression Conference (DCC-1999), Snowbird, UT, 1999.
28. Distributed Source Coding (Compression with Side Information)
[Diagram: two statistically dependent sources; the side information is available at the decoder; reconstruction of X is lossless.]
29. Achievable Rate Region - SWC
[Figure: the Slepian-Wolf rate region in the (Rx, Ry) plane. Separate encoding and decoding with no errors requires Rx ≥ H(X) and Ry ≥ H(Y). Slepian-Wolf coding achieves the larger region bounded by Rx ≥ H(X|Y), Ry ≥ H(Y|X), and Rx + Ry ≥ H(X,Y). Corner point A = (H(X|Y), H(Y)), corner point B = (H(X), H(Y|X)); the segment A-C-B lies on the line Rx + Ry = H(X,Y) achieved by joint encoding and decoding.]
31. How Compression Works?
- Compression removes redundancy: redundant (correlated) data in, compressed (decorrelated) data out.

How Channel Coding Works?
- Channel coding does the opposite: decorrelated data in, a redundancy generator produces correlated data out.
32. Duality Between Source Coding and Channel Coding

  Source Coding            Channel Coding
  Compresses the data      Expands the data
  De-correlates the data   Correlates the data
  Complex encoder          Simple encoder
  Simple decoder           Complex decoder
33. Channel Coding (Error-Correction Coding)
[Diagram: k information bits are encoded into an n-bit codeword containing n - k parity bits; the codeword passes through an additive-noise channel and is then channel decoded.]
34. Channel Codes for DSC
[Diagram: channel coding used for decompression. The correlation between X and the side information is modeled as channel noise, and channel decoding recovers x.]
35. Turbo Coder for Slepian-Wolf Encoding
Courtesy: Anne Aaron and Bernd Girod
36. Turbo Decoder for Slepian-Wolf Decoding
[Diagram: two SISO decoders exchange extrinsic probabilities (P_extrinsic) as a priori information (P_a priori); each combines these with the channel probabilities (P_channel) to produce a posteriori probabilities (P_a posteriori).]
Courtesy: Anne Aaron and Bernd Girod
37. Wyner's Scheme
- Use a linear block code and send the syndrome.
- For an (n, k) block code there are 2^(n-k) syndromes, each corresponding to a set of 2^k words of length n.
- Each set is a coset of the code.
- Compression ratio of n : (n - k). A sketch with a (7,4) Hamming code follows the references.
- A. D. Wyner, "Recent results in the Shannon theory," IEEE Transactions on Information Theory, vol. IT-20, no. 1, January 1974.
- A. D. Wyner, "On source coding with side information at the decoder," IEEE Trans. Inf. Theory, vol. 21, no. 3, pp. 294-300, May 1975.
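A minimal sketch of Wyner's scheme; the (7,4) Hamming code and the single-bit-flip correlation model are my illustration choices, not from the slides. The encoder sends the 3-bit syndrome of each 7-bit block (compression ratio 7:3), and the decoder corrects the side information toward the signaled coset:

```python
# Wyner's scheme with a (7,4) Hamming code: 2^(7-4) = 8 syndromes, each
# indexing a coset of 2^4 = 16 words.

import numpy as np

H = np.array([[1, 0, 1, 0, 1, 0, 1],   # a parity-check matrix of Hamming(7,4):
              [0, 1, 1, 0, 0, 1, 1],   # column j is the binary expansion of j
              [0, 0, 0, 1, 1, 1, 1]])

def syndrome(x):
    return H @ x % 2                    # 3-bit compressed representation of x

def decode(s, y):
    # The error pattern e = x XOR y has syndrome s XOR Hy; for a Hamming code
    # a single-bit error's syndrome, read as a binary number, is its position.
    e_syn = (s + syndrome(y)) % 2
    pos = e_syn[0] + 2 * e_syn[1] + 4 * e_syn[2]
    x_hat = y.copy()
    if pos:
        x_hat[pos - 1] ^= 1
    return x_hat

x = np.array([1, 0, 1, 1, 0, 0, 1])
y = x.copy(); y[4] ^= 1                 # side information differs in one bit
assert np.array_equal(decode(syndrome(x), y), x)
```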
38. Linear Block Codes for DSC
[Diagram: a syndrome former H maps the length-n input x to an (n - k)-bit syndrome, the compressed data. The decoder treats the side information as a corrupted codeword and recovers x by syndrome decoding, with the correlation model playing the role of the channel noise. Compression ratio n : (n - k).]
39. LDPC Encoder (Syndrome Former Generator)
[Diagram: an LDPC syndrome former compresses X into a syndrome s, which is entropy coded; the LDPC decoder combines s with the side information Y to produce the decompressed data.]
40. Correlation Model
41. The Wyner-Ziv Theorem
- Wyner and Ziv extended the work of Slepian and Wolf by studying the lossy case in the same scenario, where the signals X and Y are statistically dependent.
- Y is transmitted at a rate equal to its entropy (Y is then called the side information), and what needs to be found is the minimum transmission rate for X that introduces no more than a given distortion D.
- This minimum is the Wyner-Ziv rate-distortion function, the lower bound for Rx.
- For MSE distortion and Gaussian statistics, the rate-distortion functions of the two systems (side information at the decoder only, or at both encoder and decoder) are the same, as in the formula below.
- A. D. Wyner and J. Ziv, "The rate-distortion function for source coding with side information at the decoder," IEEE Transactions on Information Theory, vol. 22, no. 1, pp. 1-10, January 1976.
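For reference, the standard closed form in the jointly Gaussian, MSE case (a well-known result, not spelled out on the slides):

```latex
% For jointly Gaussian (X, Y) with correlation coefficient \rho and
% mean-squared-error distortion, the Wyner-Ziv and conditional
% rate-distortion functions coincide:
R_{WZ}(D) \;=\; R_{X|Y}(D) \;=\; \max\!\left(0,\ \tfrac{1}{2}\log_2 \frac{\sigma^2_{X|Y}}{D}\right),
\qquad \sigma^2_{X|Y} = \sigma_X^2\,(1-\rho^2).
```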
42. Wyner-Ziv Codec
- A codec that separately encodes signals X and Y while jointly decoding them, but does not aim at recovering them perfectly (it accepts some distortion D in the reconstruction), is called a Wyner-Ziv codec.
43. Wyner-Ziv Coding: Lossy Compression with Side Information
[Diagram: an encoder-decoder pair with side information at the decoder, achieving R_X|Y(d), compared with a plain encoder-decoder pair achieving R(d).]
- For MSE distortion and Gaussian statistics, the rate-distortion functions of the two systems are the same.
- In general, the rate loss R(d) - R_X|Y(d) is bounded.
44.
- The structure of Wyner-Ziv encoding and decoding:
- Encoding consists of quantization followed by a binning operation, encoding U into a bin (coset) index, as in the sketch below.
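A toy sketch of quantize-then-bin (the step size and bin count are illustrative assumptions, not values from the slides):

```python
# Wyner-Ziv encoder structure: uniform scalar quantization of X into an
# index U, then binning; only the coset index of U is transmitted.

STEP = 4         # quantizer step size (assumed)
NBINS = 4        # number of bins: bin = index mod NBINS (assumed)

def wz_encode(x):
    q = round(x / STEP)          # quantization: U = Q(X)
    return q % NBINS             # binning: send only the coset index of U

def wz_decode(bin_idx, y):
    # De-binning: among indices in the bin, pick the one whose reconstruction
    # is closest to the side information, then reconstruct (here the
    # "estimation" step is plain dequantization, for simplicity).
    q_y = round(y / STEP)
    candidates = [q for q in range(q_y - NBINS, q_y + NBINS + 1)
                  if q % NBINS == bin_idx]
    q_hat = min(candidates, key=lambda q: abs(q * STEP - y))
    return q_hat * STEP

x, y = 37.0, 35.0                # correlated source and side information
print(wz_encode(x), wz_decode(wz_encode(x), y))   # bin index 1 and 36.0
```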
45.
- Structure of distributed decoders: decoding consists of de-binning followed by estimation.
46. Wyner-Ziv Coding (WZC): A Joint Source-Channel Coding Problem
47. Pixel-Domain Wyner-Ziv Residual Video Codec
[Diagram: Wyner-Ziv frames X are scalar quantized and Slepian-Wolf coded with an LDPC encoder whose output is buffered and released on "request bits" feedback. The decoder forms the side information Y by interpolation/extrapolation from the frame memory of decoded key frames, LDPC-decodes, inverse-quantizes (Q^-1), and reconstructs X. Key frames I are coded and decoded with a conventional intraframe codec.]
48. Distributed Video Coding
- Distributed coding is a new paradigm for video compression, based on Slepian and Wolf's (lossless coding) and Wyner and Ziv's (lossy coding) information-theoretic results.
- It enables low-complexity video encoding where the bulk of the computation is shifted to the decoder.
- A second architectural goal is to allow far greater robustness to packet and frame drops.
- It is useful for wireless video applications by means of a transcoding architecture.
49. PRISM
- PRISM (Power-efficient, Robust, hIgh-compression Syndrome-based Multimedia coding) is a practical video coding framework built on distributed source coding principles:
  - Flexible encoding/decoding complexity
  - High compression efficiency
  - Superior robustness to packet/frame drops
  - Light yet rich encoding syntax
- R. Puri, A. Majumdar, and K. Ramchandran, "PRISM: A video coding paradigm with motion estimation at the decoder," IEEE Transactions on Image Processing, vol. 16, no. 10, pp. 2436-2448, October 2007.
50. DIStributed COding for Video sERvices (DISCOVER)
- DISCOVER is a new video coding scheme with strong potential for new applications, targeting advances in coding efficiency, error resilience, and scalability.
- At the encoder side the video is split into two parts:
  - The first set of frames, called key frames, is encoded with a conventional H.264/AVC encoder.
  - The remaining frames, known as Wyner-Ziv frames, are coded using distributed coding principles.
- X. Artigas, J. Ascenso, M. Dalai, D. Kubasov, and M. Ouaret, "The DISCOVER codec: Architecture, techniques and evaluation," Picture Coding Symposium, 2007.
- www.discoverdvc.org
51. A Typical Distributed Video Codec
52. Side Information from Motion-Compensated Interpolation
[Diagram: each WZ frame W is coded by a Wyner-Ziv residual encoder that sends WZ parity bits, using the previous key frame as the encoder reference. The decoder interpolates between decoded key frames to form the side information Y and decodes W. Frame pattern: I, WZ, I, ..., I, WZ, I.]
53. Wyner-Ziv DCT Video Codec
[Diagram: each WZ frame W is DCT-transformed by the intraframe encoder; for each transform band k, the coefficients X_k are quantized to q_k and coded bit-plane by bit-plane (M_k) with "request bits" feedback. The interframe decoder forms the side information Y_k by interpolation/extrapolation of the key frames, decodes q_k, reconstructs, and applies the IDCT to obtain the decoded WZ frame. Key frames K use conventional intraframe coding and decoding.]
54. Foreman Sequence
- After Wyner-Ziv coding
- Side information
- 16-level quantization (1 bpp)
55. Sample Frame (Foreman)
- After Wyner-Ziv coding
- Side information
- 16-level quantization (1 bpp)
56. Carphone Sequence
- Wyner-Ziv codec: 384 kbps
- H.263 intraframe coding: 410 kbps
57. Salesman Sequence at 10 fps
- DCT-based intracoding: 247 kbps, PSNR(Y) = 33.0 dB
- Wyner-Ziv DCT codec: 256 kbps, PSNR(Y) = 39.1 dB, GOP = 16
58. Salesman Sequence at 10 fps
- H.263 I-P-P-P: 249 kbps, PSNR(Y) = 43.4 dB, GOP = 16
- Wyner-Ziv DCT codec: 256 kbps, PSNR(Y) = 39.1 dB, GOP = 16
59. Hall Monitor Sequence at 10 fps
- DCT-based intracoding: 231 kbps, PSNR(Y) = 33.3 dB
- Wyner-Ziv DCT codec: 227 kbps, PSNR(Y) = 39.1 dB, GOP = 16
60. Hall Monitor Sequence at 10 fps
- H.263 I-P-P-P: 212 kbps, PSNR(Y) = 43.0 dB, GOP = 16
- Wyner-Ziv DCT codec: 227 kbps, PSNR(Y) = 39.1 dB, GOP = 16
61. Facsimile Image Compression with DSC: CCITT-8 Image
62. Reconstructed Image with 30 Errors
63. Fax4: Reconstructed Image with 8 Errors
64. Applications
- Very low complexity encoders
- Compression for networks of cameras
- Error-resilient transmission of signal waveforms
- Digitally enhanced analog transmission
- Unequal error protection without layered coding
- Image authentication
- Random access
- Compression of encrypted signals
65. Thank You
Any questions?
66. Cosets
67.
- Let G = {000, 001, ..., 111} be the group of all 3-bit words under ⊕ (XOR).
- H = {000, 111} is a subgroup of G.
- The cosets are:
  - 001 ⊕ 000 = 001 and 001 ⊕ 111 = 110, hence {001, 110} is one coset
  - 010 ⊕ 000 = 010 and 010 ⊕ 111 = 101, so {010, 101} is another coset, and so on
- A sketch enumerating these cosets follows.
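A short enumeration of these cosets in Python (illustrative):

```python
# Enumerate the cosets of the subgroup H = {000, 111} inside the group G of
# all 3-bit words under XOR, as in the example above.

H = {0b000, 0b111}
G = range(8)

cosets = {frozenset(g ^ h for h in H) for g in G}
for c in sorted(cosets, key=min):
    print(sorted(f"{v:03b}" for v in c))
# ['000', '111'], ['001', '110'], ['010', '101'], ['011', '100']
```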
68. Hamming Distance
- The Hamming distance is the number of bit positions in which two binary sequences differ.
- Let X and Y be two binary sequences; the Hamming distance between them is d_H(X, Y) = Σᵢ (xᵢ ⊕ yᵢ).
- Example: let X = 0 0 1 1 1 0 1 0 1 0 and Y = 0 1 0 1 1 1 0 0 1 1.
- Hamming distance = sum(0 1 1 0 0 1 1 0 0 1) = 5, as in the snippet below.
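Equivalently, in Python (illustrative):

```python
# Hamming distance over equal-length bit sequences, matching the
# definition above.

def hamming(x, y):
    return sum(a != b for a, b in zip(x, y))

X = [0, 0, 1, 1, 1, 0, 1, 0, 1, 0]
Y = [0, 1, 0, 1, 1, 1, 0, 0, 1, 1]
print(hamming(X, Y))   # 5
```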