
Introduction to Polar Codes

Ryuhei Mori

Postdoctoral Fellow of ELC and Tokyo Institute of Technology

Workshop on Modern Error Correcting Codes, August 30, 2013
Tokyo, Japan

Polar codes [Arıkan 2008]

Capacity-achieving codes with efficient encoding and decoding algorithms for any symmetric channel.

- $O(N \log N)$ encoding and decoding complexity for blocklength $N$.
- $o\!\left(e^{-N^{1/2-\varepsilon}}\right)$ error probability for any $R < I(W)$ and $\varepsilon > 0$.

The 2 × 2 matrix

$$G_2 = \begin{bmatrix} 1 & 0 \\ 1 & 1 \end{bmatrix}$$

(Figure: encoding circuit with inputs $u_1, u_2$ and outputs $x_1, x_2$.)

$$\begin{bmatrix} u_1 & u_2 \end{bmatrix} \begin{bmatrix} 1 & 0 \\ 1 & 1 \end{bmatrix} = \begin{bmatrix} x_1 & x_2 \end{bmatrix}$$

Kronecker product

$$G_4 = \begin{bmatrix} 1&0&0&0 \\ 0&0&1&0 \\ 0&1&0&0 \\ 0&0&0&1 \end{bmatrix} \begin{bmatrix} 1 & 0 \\ 1 & 1 \end{bmatrix}^{\otimes 2} = \begin{bmatrix} 1&0&0&0 \\ 1&0&1&0 \\ 1&1&0&0 \\ 1&1&1&1 \end{bmatrix}$$

Kronecker product

$$G_{2^n} = (I_{2^{n-1}} \otimes G_2)\, R_{2^n}\, (I_2 \otimes G_{2^{n-1}}) = B_{2^n}\, G_2^{\otimes n}$$
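A minimal sketch of this construction in Python (my own illustration, assuming NumPy; function names are illustrative): it builds $G_2^{\otimes n}$ by repeated Kronecker products, applies the bit-reversal permutation $B_N$, and encodes a message as $x = u G_N$ over GF(2).

```python
import numpy as np

def polar_generator(n):
    """G_N = B_N G_2^{(x)n} over GF(2) for N = 2^n."""
    G2 = np.array([[1, 0], [1, 1]], dtype=np.uint8)
    G = np.array([[1]], dtype=np.uint8)
    for _ in range(n):
        G = np.kron(G, G2)                       # G_2^{(x)n}
    N = 1 << n
    # bit-reversal permutation B_N: row i of the result is row bit-reverse(i) of G_2^{(x)n}
    bitrev = [int(format(i, "0{}b".format(n))[::-1], 2) for i in range(N)]
    return G[bitrev]

G4 = polar_generator(2)                 # reproduces the 4 x 4 matrix above
u = np.array([1, 0, 1, 1], dtype=np.uint8)
x = (u @ G4) % 2                        # encoding: x = u G_N over GF(2)
print(G4)
print(x)
```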


Encoding of polar codes

(Figure: encoding circuit for $N = 8$; four frozen bits are fixed to 0 and the information bits are $u_1, u_2, u_3, u_4$.)

Encoding complexity ∝ # check nodes = $N \log N$
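The $N \log N$ complexity comes from the butterfly structure of $G_2^{\otimes n}$. A minimal sketch (my own illustration, not code from the talk) of an in-place encoder; it computes $x = u\,G_2^{\otimes n}$ over GF(2) without the bit-reversal permutation, which only reorders the outputs.

```python
def polar_encode(u):
    """In-place butterfly encoder: x = u G_2^{(x)n} over GF(2), O(N log N) XORs."""
    x = list(u)
    N = len(x)
    half = 1
    while half < N:                          # log2(N) stages
        for start in range(0, N, 2 * half):
            for j in range(start, start + half):
                x[j] ^= x[j + half]          # upper branch of each butterfly
        half *= 2
    return x

print(polar_encode([0, 0, 0, 0, 1, 0, 1, 1]))   # N = 8 -> [1, 1, 0, 1, 1, 1, 0, 1]
```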

Decoding of polar codes

Successive cancellation (SC) decoding: sequential decoding from the top to the bottom.

$F$: the index set of frozen bits.

- If $i \in F$, $\hat{u}_i = 0$.
- If $i \notin F$, $\hat{u}_i = \arg\max_{u_i \in \{0,1\}} W^{(i)}(\hat{u}_0^{i-1}, y_0^{N-1} \mid u_i)$.
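A minimal sketch of SC decoding for the BEC (my own illustration with illustrative names; it uses the natural-order transform $G_2^{\otimes n}$ without bit reversal, beliefs in {0, 1, None} where None means erased, and frozen bits fixed to 0). The first half of $u$ is decoded through the check-node ("minus") combination, re-encoded, and then the second half is decoded through the variable-node ("plus") combination.

```python
import random

def polar_transform(u):
    """x = u G_2^{(x)n} over GF(2) (natural order, no bit reversal)."""
    if len(u) == 1:
        return list(u)
    h = len(u) // 2
    return polar_transform([a ^ b for a, b in zip(u[:h], u[h:])]) + polar_transform(u[h:])

def sc_decode_bec(beliefs, frozen):
    """Return (u_hat, re-encoded codeword); beliefs[i] in {0, 1, None}."""
    if len(beliefs) == 1:
        if frozen[0]:
            u = 0
        else:
            u = beliefs[0] if beliefs[0] is not None else 0   # erased info bit: guess 0
        return [u], [u]
    h = len(beliefs) // 2
    top, bot = beliefs[:h], beliefs[h:]
    # 'minus': x_top = a XOR b and x_bot = b, so a is known only if both are known
    minus = [t ^ s if t is not None and s is not None else None for t, s in zip(top, bot)]
    u1, a_hat = sc_decode_bec(minus, frozen[:h])
    # 'plus': recover b using the re-encoded upper half a_hat
    plus = [s if s is not None else (t ^ a if t is not None else None)
            for t, s, a in zip(top, bot, a_hat)]
    u2, b_hat = sc_decode_bec(plus, frozen[h:])
    return u1 + u2, [a ^ b for a, b in zip(a_hat, b_hat)] + b_hat

# Toy run: N = 8 over BEC(0.4); freeze the 4 least reliable indices,
# computed with the erasure-probability recursion from the BEC slide later in the talk.
eps, n = 0.4, 3
N = 2 ** n
z = [eps]
for _ in range(n):
    z = [v for p in z for v in (1 - (1 - p) ** 2, p ** 2)]
frozen_set = set(sorted(range(N), key=lambda i: z[i], reverse=True)[:N // 2])
frozen = [i in frozen_set for i in range(N)]

u = [0] * N
for i, bit in zip([i for i in range(N) if not frozen[i]], [1, 0, 1, 1]):
    u[i] = bit
x = polar_transform(u)
rng = random.Random(1)
y = [None if rng.random() < eps else xi for xi in x]   # BEC: erase each symbol w.p. eps
u_hat, _ = sc_decode_bec(y, frozen)
print(u_hat == u)   # True unless the erasure pattern is uncorrectable
```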


Decoding of polar codes

ML on a tree ⇐⇒ belief propagation (BP)


Question

- Why do polar codes achieve the symmetric capacity with SC decoding?
- Which bits should be chosen as information bits?

⇒ Channel polarization

Channel polarization

- $U_0^{N-1}$: uniform random variable on $\{0,1\}^N$.
- $X_0^{N-1} := U_0^{N-1} G_N$.
- $Y_0^{N-1}$: random variable corresponding to the output of $W^N$ ($N$ independent uses of $W$).

$$N I(W) = I(U_0^{N-1}; Y_0^{N-1}) = \sum_{i=0}^{N-1} I(U_i; Y_0^{N-1}, U_0^{i-1}) = \sum_{i=0}^{N-1} I(W^{(i)})$$

where $W^{(i)}: U_i \mapsto (Y_0^{N-1}, U_0^{i-1})$.

Theorem (Channel polarization [Arıkan 2008]). As $N \to \infty$,

$$\frac{\bigl|\{i \in \{0,1,\ldots,N-1\} \mid I(W^{(i)}) > 1-\varepsilon\}\bigr|}{N} \to I(W),
\qquad
\frac{\bigl|\{i \in \{0,1,\ldots,N-1\} \mid I(W^{(i)}) < \varepsilon\}\bigr|}{N} \to 1 - I(W).$$

Channel polarization

(Figure: mutual information $I(W^{(i)})$ versus subchannel index $i$, for $W = \mathrm{BEC}(0.4)$, $I(W) = 0.6$, $N = 1024$.)

Process of subchannels


Binary erasure channel

(Figure: one polarization step for $\mathrm{BEC}(\varepsilon)$; the two subchannels are erasure channels with erasure probabilities $\varepsilon^2$ and $1-(1-\varepsilon)^2$.)

$$P_e(W^{(1)}) = \varepsilon^2, \qquad P_e(W^{(0)}) = 1-(1-\varepsilon)^2$$

$$\frac{P_e(W^{(0)}) + P_e(W^{(1)})}{2} = \varepsilon = P_e(W)$$
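For the BEC these two formulas can be iterated to obtain the erasure probability of every subchannel, and hence $I(W^{(i)}) = 1 - P_e(W^{(i)})$. A minimal sketch (illustrative function name; the $j$-th bit of the index selects $W^{(0)}$ or $W^{(1)}$ at step $j$):

```python
def bec_subchannel_erasure_probs(eps, n):
    """Erasure probabilities of the N = 2^n subchannels of BEC(eps)."""
    z = [eps]
    for _ in range(n):
        # each channel with erasure prob p splits into (W^(0), W^(1))
        z = [v for p in z for v in (1 - (1 - p) ** 2, p ** 2)]
    return z

# Data behind the earlier plot: W = BEC(0.4), N = 1024
z = bec_subchannel_erasure_probs(0.4, 10)
mutual_info = [1 - zi for zi in z]           # I(W^(i)) = 1 - Pe(W^(i)) for the BEC
print(sum(mutual_info) / len(mutual_info))   # = 0.6 = I(W), as conservation requires
```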

Martingale convergence theorem

$B_1, \ldots, B_n$: i.i.d. uniform $\{0,1\}$-valued random variables.

$$Z_n := P_e\bigl(W^{(B_1)(B_2)\cdots(B_n)}\bigr)$$

The random process $Z_n$ is a martingale, i.e.,

$$\mathbb{E}[Z_{n+1} \mid B_1, \ldots, B_n] = \frac{Z_n^2 + 1 - (1-Z_n)^2}{2} = Z_n.$$

Theorem (Martingale convergence theorem). If $\sup_n \mathbb{E}[|X_n|] < \infty$, then the martingale $X_n$ converges almost surely.

Convergence to 0-1 random variable

(Figure: behaviour of the erasure probability under the random recursion below.)

$$Z_n = \begin{cases} Z_{n-1}^2, & \text{w.p. } 1/2,\\[2pt] 1-(1-Z_{n-1})^2, & \text{w.p. } 1/2,\end{cases} \qquad Z_0 = \varepsilon.$$

$Z_\infty$ takes the value 1 with probability $\varepsilon$ and the value 0 with probability $1-\varepsilon$.
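A quick Monte Carlo check of this convergence (my own sketch, with illustrative parameters): sample the random recursion many times and look at where $Z_n$ ends up.

```python
import random

def sample_Z(eps, n, rng):
    z = eps
    for _ in range(n):
        z = z ** 2 if rng.random() < 0.5 else 1 - (1 - z) ** 2
    return z

rng = random.Random(0)
eps, n, trials = 0.4, 40, 50000
samples = [sample_Z(eps, n, rng) for _ in range(trials)]
print(sum(s > 0.99 for s in samples) / trials)   # close to eps
print(sum(s < 0.01 for s in samples) / trials)   # close to 1 - eps
```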

From BEC to general channel

For a general channel, $P_e(W^{(B_1)\cdots(B_n)})$ is not a martingale.

But $I(W^{(B_1)\cdots(B_n)})$ is a martingale.

Theorem (Channel polarization [Arıkan 2008]). As $N \to \infty$,

$$\frac{\bigl|\{i \in \{0,1,\ldots,N-1\} \mid I(W^{(i)}) > 1-\varepsilon\}\bigr|}{N} \to I(W),
\qquad
\frac{\bigl|\{i \in \{0,1,\ldots,N-1\} \mid I(W^{(i)}) < \varepsilon\}\bigr|}{N} \to 1 - I(W).$$

Construction and error

Bits associated with high-$I(W^{(i)})$ subchannels are chosen as information bits.

Bits associated with low-$I(W^{(i)})$ subchannels are chosen as frozen bits.

$I(W^{(i)})$ (and also $P_e(W^{(i)})$) can be evaluated by density evolution [Mori and Tanaka 2009].

$$B_i := \{\, u_1^N, y_1^N \mid \hat{u}_1^{i-1} = u_1^{i-1},\ \hat{U}_i(y_1^N, u_1^{i-1}) \ne u_i \,\}
\subseteq \{\, u_1^N, y_1^N \mid \hat{U}_i(y_1^N, u_1^{i-1}) \ne u_i \,\} =: A_i$$

$$P_{\mathrm{error}}(F) = \Pr\Bigl(\bigcup_{i \in F^c} B_i\Bigr) = \sum_{i \in F^c} \Pr(B_i) \le \sum_{i \in F^c} \Pr(A_i) = \sum_{i \in F^c} P_e(W_N^{(i)})$$

(the second equality holds because the events $B_i$ are disjoint).
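A minimal construction sketch for the BEC (illustrative names; general channels would need the density evolution cited above): pick the $k$ subchannels with the smallest erasure probability and read off the union bound.

```python
def design_bec_polar(eps, n, k):
    """Information set of size k for BEC(eps), plus the union bound on the
    SC block-error probability: sum of Pe(W^(i)) over the information set."""
    z = [eps]
    for _ in range(n):
        z = [v for p in z for v in (1 - (1 - p) ** 2, p ** 2)]
    info = sorted(sorted(range(len(z)), key=lambda i: z[i])[:k])
    return info, sum(z[i] for i in info)

info_set, union_bound = design_bec_polar(0.4, 10, 512)   # rate 1/2, BEC(0.4), N = 1024
print(union_bound)
```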

Table of contents

- Speed of polarization [Arıkan and Telatar 2008]
- ℓ × ℓ construction [Korada, Sasoglu, and Urbanke 2009]
- Detailed speed of polarization [Tanaka and Mori 2010], [Hassani and Urbanke 2010], [Hassani, Mori, Tanaka, and Urbanke 2012]
- Scaling of polarization [Korada, Montanari, Telatar, and Urbanke 2010]
- Compound capacity [Hassani, Korada, and Urbanke 2009]
- Non-binary polar codes [Sasoglu, Telatar, and Arıkan 2009], [Mori and Tanaka 2010]

Speed of polarization

[Arıkan and Telatar 2008] For any $\varepsilon > 0$,

$$\lim_{n\to\infty} \Pr\Bigl(Z_n < 2^{-N^{1/2-\varepsilon}}\Bigr) = I(W), \qquad
\lim_{n\to\infty} \Pr\Bigl(Z_n < 2^{-N^{1/2+\varepsilon}}\Bigr) = 0.$$

Hence, the error probability of SC decoding for polar codes is $o\bigl(2^{-N^{1/2-\varepsilon}}\bigr)$ and $\omega\bigl(2^{-N^{1/2+\varepsilon}}\bigr)$ for any $\varepsilon > 0$.


ℓ × ℓ matrix

Polar codes using an $\ell \times \ell$ matrix $G$ instead of $\begin{bmatrix}1 & 0\\ 1 & 1\end{bmatrix}$. [Korada, Sasoglu, and Urbanke 2009]

$E(G) \in (0,1)$ is called the exponent of $G$ if

$$\lim_{n\to\infty} \Pr\Bigl(Z_n < 2^{-N^{E(G)-\varepsilon}}\Bigr) = I(W), \qquad
\lim_{n\to\infty} \Pr\Bigl(Z_n < 2^{-N^{E(G)+\varepsilon}}\Bigr) = 0$$

for any $\varepsilon > 0$.

Obviously, $E\Bigl(\begin{bmatrix}1 & 0\\ 1 & 1\end{bmatrix}\Bigr) = \frac{1}{2}$.

Exponent of matrix [Korada, Sasoglu, and Urbanke 2009]

$$E(G) = \frac{1}{\ell} \sum_{i=1}^{\ell} \log_\ell D_i$$

For an $\ell \times \ell$ matrix $G$, the partial distances $D_i$ are defined as

$$D_i := d(g_i, \langle g_{i+1}, \ldots, g_\ell\rangle), \quad i = 1, \ldots, \ell-1, \qquad D_\ell := d(g_\ell, 0).$$

Here, $g_i$ is the $i$-th row of $G$ and $\langle g_i, \ldots, g_\ell\rangle$ is the subcode spanned by $g_i, \ldots, g_\ell$.

Example:

$$\begin{bmatrix}1&0&0\\1&0&1\\1&1&1\end{bmatrix}:\ D_1 = 1,\ D_2 = 1,\ D_3 = 3
\qquad
\begin{bmatrix}1&0&0\\1&0&1\\1&1&0\end{bmatrix}:\ D_1 = 1,\ D_2 = 2,\ D_3 = 2$$
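A brute-force sketch of these definitions (my own illustration, illustrative names): enumerate the subcode $\langle g_{i+1}, \ldots, g_\ell\rangle$ and take the minimum Hamming distance to $g_i$.

```python
from itertools import product
from math import log

def partial_distances(G):
    """Partial distances D_1, ..., D_l of a binary l x l matrix (rows as 0/1 lists)."""
    l = len(G)
    D = []
    for i in range(l):
        tail = G[i + 1:]                      # rows spanning <g_{i+1}, ..., g_l>
        best = None
        for coeffs in product((0, 1), repeat=len(tail)):
            word = [(G[i][j] + sum(c * r[j] for c, r in zip(coeffs, tail))) % 2
                    for j in range(l)]
            w = sum(word)                     # Hamming distance from g_i to this codeword
            best = w if best is None else min(best, w)
        D.append(best)
    return D

def exponent(G):
    l = len(G)
    return sum(log(d, l) for d in partial_distances(G)) / l

print(partial_distances([[1, 0, 0], [1, 0, 1], [1, 1, 1]]))   # [1, 1, 3]
print(partial_distances([[1, 0, 0], [1, 0, 1], [1, 1, 0]]))   # [1, 2, 2]
print(exponent([[1, 0], [1, 1]]))                             # 0.5
```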

Exponent of matrix

$$E(G) = \frac{1}{\ell} \sum_{i=1}^{\ell} \log_\ell D_i$$

For $E_\ell := \max_{G \in \{0,1\}^{\ell\times\ell}} E(G)$ it holds that $\lim_{\ell\to\infty} E_\ell = 1$.

$E_\ell \le \frac{1}{2}$ for $\ell \le 14$ and $E_{16} = 0.51828 > \frac{1}{2}$. [Korada, Sasoglu, and Urbanke 2009]


Detailed speed of polarization

[Tanaka and Mori 2010], [Hassani and Urbanke 2010], [Hassani, Mori, Tanaka, and Urbanke 2011]

For $R \in (0,1)$,

$$\lim_{n\to\infty} \Pr\Bigl(Z(W_n) \le 2^{-\ell^{\,nE(G) + \sqrt{nV(G)}\,Q^{-1}(R/I(W)) + f(n)}}\Bigr) = R$$

for any $f(n) = o(\sqrt{n})$, where

$$Q(x) := \int_x^\infty \frac{1}{\sqrt{2\pi}}\, e^{-u^2/2}\, du, \qquad
E(G) := \frac{1}{\ell}\sum_{i=1}^{\ell} \log_\ell D_i(G), \qquad
V(G) := \frac{1}{\ell}\sum_{i=1}^{\ell} \bigl(\log_\ell D_i(G) - E(G)\bigr)^2.$$
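A small numerical reading of this refinement for Arıkan's $G_2$ (a sketch under my reading of the formula above; the channel, blocklength, and rate below are illustrative, and $Q^{-1}$ comes from the standard library's NormalDist):

```python
from statistics import NormalDist
from math import log, sqrt

def Q_inv(y):
    """Inverse of the Gaussian tail function Q."""
    return NormalDist().inv_cdf(1 - y)

# G_2 has partial distances D = (1, 2), so with l = 2:
logD = [log(1, 2), log(2, 2)]
E = sum(logD) / 2                          # exponent E(G_2) = 0.5
V = sum((d - E) ** 2 for d in logD) / 2    # variance V(G_2) = 0.25

n, R, I_W = 10, 0.45, 0.5                  # N = 2^10, rate 0.45, I(W) = 0.5 (illustrative)
e_n = n * E + sqrt(n * V) * Q_inv(R / I_W)
print(2.0 ** -(2.0 ** e_n))                # predicted threshold on Z(W_n) at this rate
```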


Scaling of polar codes, BEC(ε = 0.5)

(Figure: block-error probability $P$ versus rate $R$ for $N = 2^{10}, 2^{12}, 2^{14}, 2^{16}, 2^{18}$.)

- $N \to \infty$ while $R$ is fixed ⇐⇒ Shannon–Gallager type analysis
- $N \to \infty$ while $P_e$ is fixed ⇐⇒ Weiss–Dobrushin–Strassen type analysis

Scaling of polarization

[Korada, Montanari, Telatar, and Urbanke 2010], [Hassani, Alishahi, and Urbanke 2010]

For an arbitrary fixed $a \in (0,1)$, let

$$F_N(\varepsilon) := \Pr\bigl(\varepsilon \le Z(W_n) \le a\bigr).$$

Scaling assumption: there exists $\mu > 0$ (called a scaling parameter) such that for any $\varepsilon \in (0, a]$,

$$F(\varepsilon) := \lim_{N\to\infty} N^{\frac{1}{\mu}} F_N(\varepsilon) \in (0, \infty).$$

If the scaling assumption holds,

$$F^{-1}\bigl(N^{\frac{1}{\mu}}(I(W) - R)\bigr) \le P_e.$$

Scaling parameter of polar codes

[Korada, Montanari, Telatar, and Urbanke 2010], [Hassani, Alishahi, and Urbanke 2010]

$$P_e \ge F^{-1}\bigl(N^{\frac{1}{\mu}}(I(W) - R)\bigr)$$

$$NR \le NI(W) - N^{1-\frac{1}{\mu}} F(P_e)$$

This is a Weiss–Dobrushin–Strassen type analysis.

From the scaling assumption,

$$-\frac{1}{\mu} = \lim_{n\to\infty} \frac{1}{n} \log \Pr\bigl(a \le Z(W_n) \le b\bigr).$$

It can be evaluated for the BEC: $1/\mu \approx 0.2757$. By a Gaussian approximation, for the AWGN channel, $1/\mu \approx 0.2497$. Random codes and LDPC codes have $1/\mu = 0.5$.


Compound capacity of polar codes

[Hassani, Korada, and Urbanke 2009]

$\mathcal{W}$: a set of channels.

$$C(\mathcal{W}) = \max_{P_X} \inf_{W \in \mathcal{W}} I(X;Y)$$

The compound capacity of polar codes with SC decoding is

$$\lim_{N\to\infty} \frac{1}{N}\sum_{i=1}^{N} \inf_{W\in\mathcal{W}} I(W_N^{(i)}).$$

For BEC(0.5) and BSC(0.11002), the compound capacity is about 0.4816.


Polarization by non-binary matrix

The matrix $\begin{bmatrix}1 & 0\\ 1 & 1\end{bmatrix}$ on the commutative ring $\mathbb{Z}/q\mathbb{Z}$ polarizes any channel if and only if $q$ is a prime. [Sasoglu, Telatar, and Arıkan 2009]

The matrix $\begin{bmatrix}1 & 0\\ 1 & \gamma\end{bmatrix}$ on the field $\mathbb{F}_q$ polarizes any channel if and only if $\mathbb{F}_p(\gamma) = \mathbb{F}_q$. [Mori and Tanaka 2010]

A necessary and sufficient condition for an $\ell \times \ell$ matrix on $\mathbb{F}_q$ is also obtained in [Mori and Tanaka 2012].

Reed-Solomon matrix [Mori and Tanaka 2010]

Let $\alpha$ be a primitive element of $\mathbb{F}_q$. A Reed-Solomon matrix $G_{\mathrm{RS}}(q)$ is defined by evaluating the monomials $X^{q-1}, X^{q-2}, \ldots, X, 1$ (rows, top to bottom) at the points $\alpha^{q-2}, \alpha^{q-3}, \ldots, \alpha, 1, 0$ (columns, left to right):

$$G_{\mathrm{RS}}(q) =
\begin{bmatrix}
1 & 1 & \cdots & 1 & 1 & 0 \\
\alpha^{(q-2)(q-2)} & \alpha^{(q-3)(q-2)} & \cdots & \alpha^{q-2} & 1 & 0 \\
\alpha^{(q-2)(q-3)} & \alpha^{(q-3)(q-3)} & \cdots & \alpha^{q-3} & 1 & 0 \\
\vdots & \vdots & & \vdots & \vdots & \vdots \\
\alpha^{q-2} & \alpha^{q-3} & \cdots & \alpha & 1 & 0 \\
1 & 1 & \cdots & 1 & 1 & 1
\end{bmatrix}$$

The submatrix consisting of the $i$-th through the last row is a generator matrix of an extended Reed-Solomon code. The size $\ell$ of the RS matrix is $q$.

Since $G_{\mathrm{RS}}(2) = \begin{bmatrix}1 & 0\\ 1 & 1\end{bmatrix}$, the RS matrix can be regarded as a generalization of Arıkan's binary matrix $\begin{bmatrix}1 & 0\\ 1 & 1\end{bmatrix}$.

Since $D_i = i$ for $i = 1, \ldots, q$ (with the partial-distance indexing defined earlier),

$$E(G_{\mathrm{RS}}(q)) = \frac{\log(q!)}{q \log q}.$$
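A minimal sketch constructing $G_{\mathrm{RS}}(q)$ (my own illustration, for prime $q$ only, since then $\mathbb{F}_q$ arithmetic is just arithmetic mod $q$; function name is illustrative):

```python
def grs_matrix(q, alpha):
    """G_RS(q) for prime q: rows = monomials X^{q-1}, ..., X, 1 evaluated at
    the points alpha^{q-2}, ..., alpha, 1, 0 (arithmetic mod q)."""
    points = [pow(alpha, e, q) for e in range(q - 2, -1, -1)] + [0]
    return [[1 if d == 0 else pow(x, d, q) for x in points]
            for d in range(q - 1, -1, -1)]

print(grs_matrix(2, 1))   # [[1, 0], [1, 1]]  = Arıkan's G_2
print(grs_matrix(3, 2))   # [[1, 1, 0], [2, 1, 0], [1, 1, 1]]; its Kronecker square is the
                          # 9 x 9 matrix on the q-ary Reed-Muller slide below
```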

Exponent of Reed-Solomon matrix

$$E(G_{\mathrm{RS}}(q)) = \frac{\log(q!)}{q \log q}$$

q             2      4         16        64        256
E(G_RS(q))    0.5    0.573120  0.691408  0.770821  0.822264

$$\lim_{q\to\infty} E(G_{\mathrm{RS}}(q)) = 1$$

The exponent of any binary matrix of size smaller than 32 is smaller than 0.55. [Korada, Sasoglu, and Urbanke 2009]

The Reed-Solomon matrix is useful for obtaining a large exponent!

How about the performance at finite blocklength?
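The table can be checked with a one-liner (a sketch; lgamma(q+1) equals ln q!):

```python
from math import lgamma, log

def rs_exponent(q):
    """E(G_RS(q)) = log(q!) / (q log q)."""
    return lgamma(q + 1) / (q * log(q))

for q in (2, 4, 16, 64, 256):
    print(q, round(rs_exponent(q), 6))   # 0.5, 0.573120, 0.691408, 0.770821, 0.822264
```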

Simulation result on BAWGNC ($I(W) = 0.5$)

(Figure: error probability versus rate for binary polar codes and 4-ary polar codes, $N = 2^7, 2^9, 2^{11}, 2^{13}$.)

Polar codes and Reed-Muller codes: binary case

[Arıkan 2009]

Rows of the polarization matrix are evaluations of monomials. For the $4 \times 4$ case, the columns of

$$\begin{bmatrix}1&0&0&0\\ 1&1&0&0\\ 1&0&1&0\\ 1&1&1&1\end{bmatrix}$$

are indexed by $(X_2, X_1) \in \{(1,1), (1,0), (0,1), (0,0)\}$ and the rows, labeled $00, 01, 10, 11$, are the evaluations of the monomials $X_2X_1,\ X_2,\ X_1,\ 1$, respectively (the $2 \times 2$ case $\begin{bmatrix}1&0\\1&1\end{bmatrix}$ uses the single variable $X \in \{1, 0\}$ and the monomials $X, 1$).

Polar rule: $\{\, i \in \{0, \ldots, 2^n-1\} \mid P_e(W^{(i_1)\cdots(i_n)}) < \varepsilon \,\}$
Reed-Muller rule: $\{\, i \in \{0, \ldots, 2^n-1\} \mid i_1 + \cdots + i_n > k \,\}$

Binary polar codes using $\begin{bmatrix}1&0\\1&1\end{bmatrix}$ and binary Reed-Muller codes are similar.

The Reed-Muller rule maximizes the minimum distance.
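A small sketch comparing the two rules for the BEC (my own illustration with illustrative names; the polar rule uses the BEC erasure-probability recursion under a fixed index convention, and ties in either rule are broken arbitrarily):

```python
def polar_rule(eps, n, k):
    """k indices with the smallest Pe(W^(i1)...(in)) for BEC(eps)."""
    z = [eps]
    for _ in range(n):
        z = [v for p in z for v in (1 - (1 - p) ** 2, p ** 2)]
    return set(sorted(range(2 ** n), key=lambda i: z[i])[:k])

def rm_rule(n, k):
    """k indices with the largest i1 + ... + in (Reed-Muller rule)."""
    return set(sorted(range(2 ** n), key=lambda i: -bin(i).count("1"))[:k])

n, k = 4, 8
print(sorted(polar_rule(0.5, n, k)))
print(sorted(rm_rule(n, k)))
print(len(polar_rule(0.5, n, k) & rm_rule(n, k)), "indices in common")
```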

Polar codes using RS matrix and Reed-Muller codes: q-ary case

The columns of

$$G_{\mathrm{RS}}(3)^{\otimes 2} =
\begin{bmatrix}
1&1&0&1&1&0&0&0&0\\
2&1&0&2&1&0&0&0&0\\
1&1&1&1&1&1&0&0&0\\
2&2&0&1&1&0&0&0&0\\
1&2&0&2&1&0&0&0&0\\
2&2&2&1&1&1&0&0&0\\
1&1&0&1&1&0&1&1&0\\
2&1&0&2&1&0&2&1&0\\
1&1&1&1&1&1&1&1&1
\end{bmatrix}$$

are indexed by $(X_2, X_1) \in \{(2,2), (2,1), (2,0), (1,2), (1,1), (1,0), (0,2), (0,1), (0,0)\}$ and the rows, labeled $00, 01, 02, 10, 11, 12, 20, 21, 22$, are the evaluations of the monomials $X_2^2X_1^2,\ X_2^2X_1,\ X_2^2,\ X_2X_1^2,\ X_2X_1,\ X_2,\ X_1^2,\ X_1,\ 1$, respectively.

Polar rule: $\{\, i \in \{0, \ldots, q^n-1\} \mid P_e(W^{(i_1)\cdots(i_n)}) < \varepsilon \,\}$
Reed-Muller rule: $\{\, i \in \{0, \ldots, q^n-1\} \mid i_1 + \cdots + i_n > k \,\}$

$q$-ary polar codes using $G_{\mathrm{RS}}(q)$ and $q$-ary Reed-Muller codes are also similar.

Hyperbolic rule: $\{\, i \in \{0, \ldots, q^n-1\} \mid (i_1+1)\cdots(i_n+1) > k \,\}$

The hyperbolic rule maximizes the minimum distance (Massey–Costello–Justesen codes, hyperbolic cascaded RS codes).

Summary

- Polar codes are provably capacity-achieving codes with efficient encoding and decoding algorithms.
- The error probability of polar codes decays slowly when the rate is close to capacity.
- The asymptotic performance of polar codes can be improved by using $\ell \times \ell$ and non-binary matrices.
- Polar codes are suitable for many problems, e.g., lossless and lossy source coding, problems with side information, the multiple-access channel, etc.
