dc-04-optimum receivers for AWGN channels
DESCRIPTION
Advanced digital communications (transcript)
Chapter 4: Optimum Receiver for AWGN Channels
Wireless Information Transmission System Lab., Institute of Communications Engineering, National Sun Yat-sen University
Contents
◊ 4.1 Waveform and Vector Channel Models
◊ 4.2 Waveform and Vector AWGN Channels
◊ 4.3 Optimal Detection and Error Probability for Band-Limited Signaling
◊ 4.4 Optimal Detection and Error Probability for Power-Limited Signaling
◊ 4.5 Optimal Detection in Presence of Uncertainty: Non-Coherent Detection
◊ 4.6 A Comparison of Digital Signaling Methods
◊ 4.10 Performance Analysis for Wireline and Radio Communication Systems
Chapter 4.1: Waveform and Vector Channel Models
Waveform & Vector Channel Models (1)
◊ In this chapter, we study the effect of noise on the reliability of the modulation systems studied in Chapter 3.
◊ We assume that the transmitter sends digital information by use of M signal waveforms {sm(t), m = 1, 2, …, M}. Each waveform is transmitted within the symbol interval of duration T, i.e. 0 ≤ t ≤ T.
◊ The additive white Gaussian noise (AWGN) channel model:
    r(t) = sm(t) + n(t)
◊ sm(t): transmitted signal
◊ n(t): sample function of an AWGN process with PSD Φnn(f) = N0/2 (W/Hz)
Waveform & Vector Channel Models (2)
◊ Based on the observed signal r(t), the receiver makes a decision about which message m, 1 ≤ m ≤ M, was transmitted.
◊ Optimum decision: minimize the error probability Pe = P[m̂ ≠ m].
◊ Any orthonormal basis {φj(t), 1 ≤ j ≤ N} can be used for expansion of a zero-mean white Gaussian process (Problem 2.8-1).
◊ The resulting coefficients are i.i.d. zero-mean Gaussian random variables with variance N0/2.
◊ {φj(t), 1 ≤ j ≤ N} can be used for expansion of the noise n(t).
◊ Using {φj(t), 1 ≤ j ≤ N}, r(t) = sm(t) + n(t) has the vector form
    r = sm + n
◊ All vectors are N-dimensional.
◊ The components of n are i.i.d. zero-mean Gaussian with variance N0/2.
Waveform & Vector Channel Models (3)
◊ It is convenient to subdivide the receiver into two parts—the signal demodulator and the detector.
◊ The function of the signal demodulator is to convert the received waveform r(t) into an N-dimensional vector r = [r1 r2 … rN], where N is the dimension of the transmitted signal waveforms.
◊ The function of the detector is to decide which of the M possible signal waveforms was transmitted based on the vector r.
Waveform & Vector Channel Models (4)
◊ Two realizations of the signal demodulator are described in the next two sections:
◊ One is based on the use of signal correlators.
◊ The second is based on the use of matched filters.
◊ The optimum detector that follows the signal demodulator is designed to minimize the probability of error.
Chapter 4.1-1: Optimal Detection for General Vector Channel
4.1-1 Optimal Detection for General Vector Channel (1)
◊ AWGN channel model: r = sm + n
◊ Message m is chosen from the set {1, 2, …, M} with probability Pm.
◊ The components of n are i.i.d. N(0, N0/2); the PDF of the noise n is
    p(n) = (1/(πN0))^{N/2} exp(−Σ_{j=1}^{N} nj²/N0) = (1/(πN0))^{N/2} exp(−||n||²/N0)
◊ A more general vector channel model:
◊ sm is selected from {sm, 1 ≤ m ≤ M} according to a priori probability Pm.
◊ The received vector r depends statistically on the transmitted vector through the conditional PDF p(r|sm).
4.1-1 Optimal Detection for General Vector Channel (2)
◊ Based on the observation r, the receiver decides which message m̂ ∈ {1, 2, …, M} was transmitted.
◊ Decision function g(r): a function from R^N into the messages {1, 2, …, M}.
◊ Given that r is received, the probability of correct detection:
    P[correct decision | r] = P[m̂ sent | r]
◊ The probability of correct detection:
    P[correct decision] = ∫ P[correct decision | r] p(r) dr = ∫ P[m̂ sent | r] p(r) dr
◊ Maximizing P[correct detection] = maximizing P[m̂ | r] for each r.
◊ Optimal decision rule:
    m̂ = g_opt(r) = arg max_{1≤m≤M} P[m | r] = arg max_{1≤m≤M} P[sm | r]
MAP and ML Receivers
◊ Optimum decision rule:
    m̂ = g_opt(r) = arg max_{1≤m≤M} P[sm | r]
◊ This maximizes the a posteriori probability: the MAP rule.
◊ The MAP rule can be simplified:
    m̂ = g_opt(r) = arg max_{1≤m≤M} p(r, sm)/p(r) = arg max_{1≤m≤M} Pm p(r | sm)/p(r) = arg max_{1≤m≤M} Pm p(r | sm)
◊ When the messages are equiprobable a priori, P1 = … = PM = 1/M:
    m̂ = g_opt(r) = arg max_{1≤m≤M} p(r | sm)
◊ p(r | sm) is called the likelihood of message m.
◊ This is the maximum-likelihood (ML) receiver.
◊ Note: p(r | sm) is the channel conditional PDF.
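The MAP and ML rules above can be sketched in a few lines. This is a minimal illustration, not from the slides; the one-dimensional antipodal signal set and the priors below are made up for the example.

```python
import numpy as np

def map_detect(r, signals, priors, N0):
    """MAP rule for an AWGN vector channel: argmax_m  ln P_m - ||r - s_m||^2 / N0."""
    r = np.asarray(r, dtype=float)
    metrics = [np.log(P) - np.sum((r - np.asarray(s)) ** 2) / N0
               for s, P in zip(signals, priors)]
    return int(np.argmax(metrics))

def ml_detect(r, signals, N0):
    """ML rule: equal priors reduce MAP to minimum Euclidean distance."""
    M = len(signals)
    return map_detect(r, signals, [1.0 / M] * M, N0)

# Hypothetical binary antipodal example: s1 = +1, s2 = -1 (N = 1)
signals = [np.array([1.0]), np.array([-1.0])]
print(map_detect([0.1], signals, [0.5, 0.5], N0=1.0))    # -> 0 (nearest signal)
print(map_detect([0.1], signals, [0.01, 0.99], N0=1.0))  # -> 1 (prior flips the decision)
```

With equal priors the decision goes to the nearest signal; a strongly skewed prior can pull the decision boundary past the observation, as the second call shows.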
The Decision Region
◊ Any detector defines a map R^N → {1, 2, …, M}.
⇒ Partition the output space R^N into M regions (D1, D2, …, DM); if r ∈ Dm, then m̂ = g(r) = m.
◊ Dm is the decision region for message m.
◊ For a MAP detector,
    Dm = {r ∈ R^N : P[m | r] > P[m′ | r], for all 1 ≤ m′ ≤ M and m′ ≠ m}
◊ If more than one message achieves the maximum a posteriori probability, arbitrarily assign r to one of the decision regions.
The Error Probability (1)
◊ When sm is transmitted, an error occurs when r is not in Dm.
◊ Symbol error probability of a receiver with {Dm, 1 ≤ m ≤ M}:
    Pe = Σ_{m=1}^{M} Pm P[r ∉ Dm | sm sent] = Σ_{m=1}^{M} Pm Pe|m
◊ Pe|m is the error probability when message m is transmitted:
    Pe|m = ∫_{Dm^c} p(r | sm) dr = Σ_{1≤m′≤M, m′≠m} ∫_{Dm′} p(r | sm) dr
◊ Symbol error probability (or message error probability):
    Pe = Σ_{m=1}^{M} Pm Σ_{1≤m′≤M, m′≠m} ∫_{Dm′} p(r | sm) dr
The Error Probability (2)
◊ Another type of error probability, the bit error probability Pb: the error probability in transmission of a single bit.
◊ Requires knowledge of how different bit sequences are mapped to signal points.
◊ Finding the bit error probability is not easy unless the constellation exhibits certain symmetry properties.
◊ Relation between symbol error probability and bit error probability (k bits per symbol):
    Pe/k ≤ Pb ≤ Pe ≤ k Pb
Sufficient Statistics (An Example)
◊ Assumption (1): the observation r can be written in terms of r1 and r2, r = (r1, r2).
◊ Assumption (2): p(r1, r2 | sm) = p(r1 | sm) p(r2 | r1)
◊ The MAP detection becomes
    m̂ = arg max_{1≤m≤M} Pm p(r | sm) = arg max_{1≤m≤M} Pm p(r1, r2 | sm)
      = arg max_{1≤m≤M} Pm p(r1 | sm) p(r2 | r1) = arg max_{1≤m≤M} Pm p(r1 | sm)
◊ Under these assumptions, the optimal detection is based only on r1; r1 is a sufficient statistic for detection of sm.
◊ r2 can be ignored; r2 is irrelevant data (irrelevant information).
◊ Recognizing sufficient statistics helps to reduce the complexity of the detection by ignoring irrelevant data.
Preprocessing at the Receiver (1)
    sm → Channel → r → G(r) → ρ → Detector → m̂
◊ Assume that the receiver applies an invertible operation G(r) before detection.
◊ The optimal detection is
    m̂ = arg max_{1≤m≤M} Pm p(ρ, r | sm) = arg max_{1≤m≤M} Pm p(r | sm) p(ρ | r)
      = arg max_{1≤m≤M} Pm p(r | sm)
◊ When r is given, ρ does not depend on sm.
◊ The optimal detector based on the observation of ρ makes the same decision as the optimal detector based on the observation of r.
◊ The invertible operation does not change the optimality of the receiver.
Preprocessing at the Receiver (2)
◊ Ex. 4.1-3: Assume that the received vector is of the form
    r = sm + n
where n is colored noise. Let us further assume that there exists an invertible whitening operator denoted by W, s.t. v = Wn is a white vector.
◊ Consider
    ρ = Wr = Wsm + v
◊ This is equivalent to a channel with white noise for detection.
◊ There is no degradation in performance.
◊ The linear operator W is called a whitening filter.
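A whitening operator W can be built from the eigendecomposition of the noise covariance. This is a sketch under a made-up 2x2 covariance matrix; the slides only require that W be invertible and that v = Wn be white.

```python
import numpy as np

# Build W = diag(lam)^(-1/2) E^T from Cn = E diag(lam) E^T, so that
# the whitened noise v = W n has covariance W Cn W^T = I.
Cn = np.array([[2.0, 0.8],
               [0.8, 1.0]])          # made-up colored-noise covariance E[n n^T]
lam, E = np.linalg.eigh(Cn)          # eigendecomposition of a symmetric matrix
W = np.diag(lam ** -0.5) @ E.T       # whitening filter (invertible since Cn > 0)

Cv = W @ Cn @ W.T                    # covariance of the whitened noise
print(np.allclose(Cv, np.eye(2)))    # True
```

Because W is invertible, detection on ρ = Wr is equivalent to detection on r, with the whitened channel now matching the white-noise analysis of this chapter.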
Chapter 4.2: Waveform and Vector AWGN Channels
Waveform & Vector AWGN Channels (1)
◊ Waveform AWGN channel:
    r(t) = sm(t) + n(t), sm(t) ∈ {s1(t), s2(t), …, sM(t)}
  with prior probability Pm.
◊ n(t): zero-mean white Gaussian with PSD N0/2.
◊ By the Gram-Schmidt procedure, we derive an orthonormal basis {φj(t), 1 ≤ j ≤ N} and the vector representation of the signals {sm, 1 ≤ m ≤ M}.
◊ The noise process n(t) is decomposed into two components:
    n1(t) = Σ_{j=1}^{N} nj φj(t), where nj = ⟨n(t), φj(t)⟩
    n2(t) = n(t) − n1(t)
Waveform & Vector AWGN Channels (2)
◊ Signal expansion:
    sm(t) = Σ_{j=1}^{N} smj φj(t), where smj = ⟨sm(t), φj(t)⟩
◊ Then
    r(t) = Σ_{j=1}^{N} (smj + nj) φj(t) + n2(t)
◊ Define rj = smj + nj, since
    ⟨r(t), φj(t)⟩ = ⟨sm(t), φj(t)⟩ + ⟨n(t), φj(t)⟩ = smj + nj = rj
◊ So
    r(t) = Σ_{j=1}^{N} rj φj(t) + n2(t), where rj = ⟨r(t), φj(t)⟩
  and in vector form r = sm + n.
◊ The noise components {nj} are i.i.d. zero-mean Gaussian with variance N0/2.
Waveform & Vector AWGN Channels (3)
◊ Proof that the noise components {nj} are i.i.d. zero-mean Gaussian with variance N0/2:
    nj = ∫ n(t) φj(t) dt
◊ Zero mean:
    E[nj] = E[∫ n(t) φj(t) dt] = ∫ E[n(t)] φj(t) dt = 0
◊ Covariance:
    COV[ni nj] = E[ni nj] − E[ni] E[nj] = E[∫∫ n(t) φi(t) n(s) φj(s) dt ds]
               = ∫∫ E[n(t) n(s)] φi(t) φj(s) dt ds
               = (N0/2) ∫∫ δ(t − s) φi(t) φj(s) dt ds      (n(t) is white)
               = (N0/2) ∫ φi(s) φj(s) ds
               = N0/2 if i = j, 0 if i ≠ j
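The variance and uncorrelatedness above can be checked numerically. This Monte Carlo sketch (not from the slides) approximates white noise of PSD N0/2 with discrete samples of variance N0/(2·dt) and projects it onto two made-up orthonormal basis functions.

```python
import numpy as np

rng = np.random.default_rng(0)
N0, T, K = 2.0, 1.0, 1000          # PSD N0/2 = 1; K samples per symbol interval
dt = T / K
t = np.arange(K) * dt
# Two orthonormal basis functions on [0, T): quadrature sinusoids
phi1 = np.sqrt(2 / T) * np.cos(2 * np.pi * t / T)
phi2 = np.sqrt(2 / T) * np.sin(2 * np.pi * t / T)

trials = 20000
# Discrete-time white noise approximating AWGN: variance N0/(2*dt) per sample
n = rng.normal(0.0, np.sqrt(N0 / (2 * dt)), size=(trials, K))
n1 = n @ phi1 * dt                  # n_j = <n(t), phi_j(t)>
n2 = n @ phi2 * dt

print(np.var(n1))                   # ~ N0/2 = 1.0
print(np.mean(n1 * n2))             # ~ 0: uncorrelated
```

The sample variance lands near N0/2 and the cross-moment near zero, consistent with the covariance computed above.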
Waveform & Vector AWGN Channels (4)
◊ Cross-covariance between n2(t) and nj:
    COV[n2(t) nj] = E[n2(t) nj] = E[n(t) nj] − E[n1(t) nj]
      = E[n(t) ∫ n(s) φj(s) ds] − E[Σ_{i=1}^{N} ni φi(t) nj]
      = (N0/2) ∫ δ(t − s) φj(s) ds − (N0/2) φj(t)
      = (N0/2) φj(t) − (N0/2) φj(t) = 0
◊ n2(t) is uncorrelated with {nj}; being Gaussian, n2(t) and n1(t) are independent.
Waveform & Vector AWGN Channels (5)
◊ Since n2(t) is independent of sm(t) and n1(t), and
    r(t) = Σ_{j=1}^{N} (smj + nj) φj(t) + n2(t),
◊ only the first component carries information.
◊ The second component is irrelevant data and can be ignored.
◊ The AWGN waveform channel
    r(t) = sm(t) + n(t), 1 ≤ m ≤ M
is equivalent to the N-dimensional vector channel
    r = sm + n, 1 ≤ m ≤ M
Chapter 4.2-1: Optimal Detection for the Vector AWGN Channel
4.2-1 Optimal Detection for the Vector AWGN Channel (1)
◊ The MAP detector in the AWGN channel r = sm + n, with n ~ N(0, (N0/2) I), is
    m̂ = arg max_{1≤m≤M} Pm p(r | sm)
      = arg max_{1≤m≤M} Pm pn(r − sm)
      = arg max_{1≤m≤M} Pm (1/(πN0))^{N/2} e^{−||r−sm||²/N0}
      = arg max_{1≤m≤M} Pm e^{−||r−sm||²/N0}          ((1/(πN0))^{N/2} is a constant)
      = arg max_{1≤m≤M} [ln Pm − ||r−sm||²/N0]         (ln(x) is increasing)
4.2-1 Optimal Detection for the Vector AWGN Channel (2)
    m̂ = arg max_{1≤m≤M} [ln Pm − ||r−sm||²/N0]
      = arg max_{1≤m≤M} [(N0/2) ln Pm − ||r−sm||²/2]                  (multiply by N0/2)
      = arg max_{1≤m≤M} [(N0/2) ln Pm − (||r||² − 2 r·sm + ||sm||²)/2]
      = arg max_{1≤m≤M} [(N0/2) ln Pm − Em/2 + r·sm]                  (||r||²/2 is dropped; Em = ||sm||²)
      = arg max_{1≤m≤M} [ηm + r·sm]        (MAP)
  where the bias term is ηm = (N0/2) ln Pm − Em/2.
4.2-1 Optimal Detection for the Vector AWGN Channel (3)
◊ MAP decision rule for the AWGN vector channel:
    m̂ = arg max_{1≤m≤M} [ηm + r·sm],  ηm = (N0/2) ln Pm − Em/2
◊ If Pm = 1/M for all m, the optimal decision becomes
    m̂ = arg max_{1≤m≤M} [(N0/2) ln(1/M) − ||r−sm||²/2] = arg max_{1≤m≤M} [−||r−sm||²] = arg min_{1≤m≤M} ||r−sm||
◊ Nearest-neighbor or minimum-distance detector.
◊ When signals are equiprobable in the AWGN channel:
    MAP = ML = minimum distance
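The equivalence between minimum distance and the bias-plus-correlation form can be shown directly. A minimal sketch with a made-up 4-point constellation (M = 4, N = 2) and a made-up observation:

```python
import numpy as np

# For equiprobable, equal-energy signals, the nearest-neighbor rule and the
# correlation-metric rule  argmax r.s_m - E_m/2  pick the same message.
S = np.array([[1.0, 1.0], [-1.0, 1.0], [-1.0, -1.0], [1.0, -1.0]])  # signal vectors
r = np.array([0.7, -0.2])                                            # received vector

m_dist = int(np.argmin(np.sum((r - S) ** 2, axis=1)))                # minimum distance
m_corr = int(np.argmax(S @ r - 0.5 * np.sum(S ** 2, axis=1)))        # correlation metric
print(m_dist, m_corr)   # -> 3 3: the two rules agree
```

Expanding ||r − sm||² = ||r||² − 2 r·sm + Em and dropping the common ||r||² term is exactly the algebra carried out on the slide above, which is why the two argmax computations coincide.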
4.2-1 Optimal Detection for the Vector AWGN Channel (4)
◊ For the minimum-distance detector, the boundary of decision regions Dm and Dm′ is equidistant from sm and sm′.
◊ Right figure: a 2-dimensional constellation (N = 2) with 4 signal points (M = 4).
◊ When the signals are equiprobable and have equal energy,
    ηm = (N0/2) ln Pm − Em/2 is independent of m
    m̂ = arg max_{1≤m≤M} r·sm
4.2-1 Optimal Detection for the Vector AWGN Channel (5)
◊ In general, the decision region is
    Dm = {r ∈ R^N : r·sm + ηm > r·sm′ + ηm′, for all 1 ≤ m′ ≤ M and m′ ≠ m}    (4.2-20)
◊ Each region is described in terms of at most M − 1 inequalities.
◊ Each boundary r·(sm − sm′) > ηm′ − ηm is the equation of a hyperplane.
◊ Since
    r·sm = ∫ r(t) sm(t) dt  and  ||sm||² = Em = ∫ sm²(t) dt,
in the AWGN channel:
    MAP: m̂ = arg max_{1≤m≤M} [(N0/2) ln Pm + ∫ r(t) sm(t) dt − (1/2) ∫ sm²(t) dt]
    ML:  m̂ = arg max_{1≤m≤M} [∫ r(t) sm(t) dt − (1/2) ∫ sm²(t) dt]
4.2-1 Optimal Detection for the Vector AWGN Channel (6)
◊ Distance metric: the Euclidean distance between r and sm:
    D(r, sm) = ||r − sm||² = ∫ (r(t) − sm(t))² dt
◊ Modified distance metric: the distance with ||r||² removed:
    D′(r, sm) = −2 r·sm + ||sm||²
◊ Correlation metric: the negative of the modified distance metric:
    C(r, sm) = 2 r·sm − ||sm||² = 2 ∫ r(t) sm(t) dt − ∫ sm²(t) dt
◊ With these definitions,
    MAP: m̂ = arg max_{1≤m≤M} [N0 ln Pm − D′(r, sm)] = arg max_{1≤m≤M} [N0 ln Pm + C(r, sm)]
    ML:  m̂ = arg max_{1≤m≤M} C(r, sm)
Optimal Detection for Binary Antipodal Signaling (1)
◊ In binary antipodal signaling:
◊ s1(t) = s(t), with p1 = p
◊ s2(t) = −s(t), with p2 = 1 − p
◊ Vector representation (N = 1):
    s1 = √Eb,  s2 = −√Eb
◊ From (4.2-20):
    D1 = {r : r√Eb + (N0/2) ln p − Eb/2 > −r√Eb + (N0/2) ln(1 − p) − Eb/2}
       = {r : r > (N0/(4√Eb)) ln((1 − p)/p)}
       = {r : r > rth},  where rth = (N0/(4√Eb)) ln((1 − p)/p)
Optimal Detection for Binary Antipodal Signaling (2)
◊ D1 = {r : r > rth ≡ (N0/(4√Eb)) ln((1 − p)/p)}
◊ When p → 0, rth → ∞: the entire real line becomes D2.
◊ When p → 1, rth → −∞: the entire real line becomes D1.
◊ When p = 1/2, rth = 0: the minimum-distance rule.
◊ Error probability of the MAP receiver:
    Pe = Σ_{m=1}^{2} Pm Σ_{m′≠m} ∫_{Dm′} p(r | sm) dr
       = p ∫_{−∞}^{rth} p(r | s = √Eb) dr + (1 − p) ∫_{rth}^{∞} p(r | s = −√Eb) dr
Optimal Detection for Binary Antipodal Signaling (3)
◊ Continuing,
    Pe = p ∫_{−∞}^{rth} p(r | s = √Eb) dr + (1 − p) ∫_{rth}^{∞} p(r | s = −√Eb) dr
       = p P[N(√Eb, N0/2) < rth] + (1 − p) P[N(−√Eb, N0/2) > rth]
       = p Q((√Eb − rth)/√(N0/2)) + (1 − p) Q((rth + √Eb)/√(N0/2))
  where Q(x) = P[N(0, 1) > x] and Q(−x) = 1 − Q(x).
◊ When p = 1/2, rth = 0, and the error probability simplifies to
    Pe = Q(√(2Eb/N0))
◊ Since the system is binary, Pe = Pb.
Error Probability for Equiprobable Binary Signaling Schemes (1)
◊ In an AWGN channel, the transmitter transmits either s1(t) or s2(t); assume the two signals are equiprobable.
◊ With equiprobable signals in an AWGN channel, the decision regions are separated by the perpendicular bisector of the line connecting s1 and s2.
◊ The error probabilities when s1 or s2 is transmitted are equal.
◊ When s1 is transmitted, an error occurs when:
◊ r is in D2, i.e.
◊ the projection of r − s1 onto s2 − s1, measured from s1, is greater than d12/2, where d12 = ||s2 − s1||:
    Pb = P[n·(s2 − s1)/d12 > d12/2] = P[n·(s2 − s1) > d12²/2]
  where n = r − s1 and (s2 − s1)/d12 is a unit vector.
Error Probability for Equiprobable Binary Signaling Schemes (2)
◊ Since n·(s2 − s1) ~ N(0, d12² N0/2),
    Pb = P[n·(s2 − s1) > d12²/2] = Q((d12²/2)/(d12 √(N0/2))) = Q(d12/√(2N0))
  using P[X > α] = Q((α − m)/σ) and P[X < α] = Q((m − α)/σ) for X ~ N(m, σ²)    (2.3-12)
◊ Since Q(x) is decreasing, minimum error probability = maximum d12.
◊ When equiprobable signals have the same energy, Es1 = Es2 = E:
    d12² = ∫ (s1(t) − s2(t))² dt = Es1 + Es2 − 2⟨s1(t), s2(t)⟩ = 2E(1 − ρ)
◊ −1 ≤ ρ ≤ 1 is the cross-correlation coefficient.
◊ d12 is maximized when ρ = −1: antipodal signals.
Optimal Detection for Binary Orthogonal Signaling (1)
◊ For binary orthogonal signals,
    ∫ si(t) sj(t) dt = Eb if i = j, 0 if i ≠ j,  1 ≤ i, j ≤ 2
◊ Choosing φj(t) = sj(t)/√Eb, the vector representation is
    s1 = (√Eb, 0);  s2 = (0, √Eb)
◊ When the signals are equiprobable (figure), d = √(2Eb).
◊ Error probability:
    Pb = Q(√(d²/(2N0))) = Q(√(Eb/N0))
◊ Given the same Pb, binary orthogonal signaling requires twice the energy of antipodal signaling.
◊ A binary orthogonal signaling system requires twice the energy per bit of a binary antipodal signaling system to provide the same error probability.
Optimal Detection for Binary Orthogonal Signaling (2)
◊ Figure: error probability vs. SNR per bit for binary orthogonal and binary antipodal signaling systems.
◊ Signal-to-noise ratio (SNR) per bit: γb = Eb/N0
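The 3 dB penalty of orthogonal versus antipodal signaling can be checked numerically from the two Q-function expressions. A sketch with a made-up operating point:

```python
import math

def qfunc(x):
    """Q(x) = P[N(0,1) > x]."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

N0, Eb = 1.0, 4.0
pb_antipodal  = qfunc(math.sqrt(2 * Eb / N0))   # antipodal: d^2 = 4 Eb
pb_orthogonal = qfunc(math.sqrt(Eb / N0))       # orthogonal: d^2 = 2 Eb
print(pb_orthogonal > pb_antipodal)             # True: orthogonal is worse at equal Eb
# Spending twice the energy per bit closes the gap exactly (the 3 dB penalty)
print(qfunc(math.sqrt(2 * Eb / N0)) == qfunc(math.sqrt((2 * Eb) / N0)))  # True
```

Evaluating the orthogonal formula at 2Eb reproduces the antipodal formula at Eb, which is precisely the "twice the energy per bit" statement on the previous slide.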
Chapter 4.2-2: Implementation of the Optimal Receiver for AWGN Channels
4.2-2 Implementation of the Optimal Receiver for AWGN Channels
◊ We present different implementations of MAP receivers in the AWGN channel:
◊ Correlation receiver
◊ Matched filter receiver
◊ All these structures are equivalent in performance and result in minimum error probability.
The Correlation Receiver (1)
◊ The MAP decision in the AWGN channel (from 4.2-17):
    m̂ = arg max_{1≤m≤M} [ηm + r·sm], where ηm = (N0/2) ln Pm − Em/2
◊ r is derived from
    rj = ∫ r(t) φj(t) dt    (correlation receiver)
1) Find the inner product of r and sm, 1 ≤ m ≤ M.
2) Add the bias term ηm.
3) Choose the m that maximizes the result.
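The three steps above can be sketched in discrete time: project the received waveform onto the basis, then maximize ηm + r·sm. The rectangular basis, biorthogonal-style signal set, and priors below are made up for illustration.

```python
import numpy as np

K, T = 8, 1.0
dt = T / K
phi = np.zeros((2, K))                       # two orthonormal rectangular basis functions
phi[0, :K // 2] = np.sqrt(2 / T)             # phi_1 on [0, T/2)
phi[1, K // 2:] = np.sqrt(2 / T)             # phi_2 on [T/2, T)
S = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])  # vector signals
priors = np.full(4, 0.25)
N0 = 0.1

s_tx = S[2] @ phi                            # transmitted waveform s_3(t)
rng = np.random.default_rng(1)
r_t = s_tx + rng.normal(0, np.sqrt(N0 / (2 * dt)), K)   # received waveform

r = phi @ r_t * dt                           # step 0: r_j = integral r(t) phi_j(t) dt
eta = (N0 / 2) * np.log(priors) - 0.5 * np.sum(S ** 2, axis=1)  # bias terms
m_hat = int(np.argmax(eta + S @ r))          # steps 1-3
print(m_hat)

# Noiseless sanity check: the projection recovers the transmitted vector exactly
m_clean = int(np.argmax(eta + S @ (phi @ s_tx * dt)))
print(m_clean)                               # -> 2
```

Since the ηm and sm are fixed, they would be precomputed and stored, as the next slide notes; only the N correlations depend on the received waveform.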
The Correlation Receiver (2)
◊ The ηm's and sm's can be computed once and stored in memory.
The Correlation Receiver (3)
◊ Another implementation:
    m̂ = arg max_{1≤m≤M} [∫ r(t) sm(t) dt + ηm], where ηm = (N0/2) ln Pm − Em/2
◊ Requires M correlators.
◊ Usually M > N.
◊ Less preferred.
The Matched Filter Receiver (1)
◊ In both correlation receivers, we compute
    rx = ∫ r(t) x(t) dt
◊ x(t) is φj(t) or sm(t).
◊ Define h(t) = x(T − t) for arbitrary T: the filter matched to x(t).
◊ If r(t) is applied to h(t), the output y(t) is
    y(t) = r(t) * h(t) = ∫ r(τ) h(t − τ) dτ = ∫ r(τ) x(T − t + τ) dτ
  using h(t − τ) = x(T − (t − τ)) = x(T − t + τ).
◊ At t = T:
    rx = y(T) = ∫ r(τ) x(τ) dτ
◊ rx can be obtained by sampling the output of the matched filter at t = T.
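The equivalence of the sampled matched-filter output and the direct correlator can be verified numerically. A discrete-time sketch; the square pulse and noise level are made up:

```python
import numpy as np

K = 64
dt = 1.0 / K                       # symbol interval T = 1
t = np.arange(K) * dt
x = np.where(t < 0.5, 1.0, -1.0)   # a made-up template signal x(t) on [0, T)
rng = np.random.default_rng(2)
r = x + rng.normal(0, 1.0, K)      # received waveform r(t) = x(t) + noise

r_corr = np.sum(r * x) * dt        # direct correlator: integral r(t) x(t) dt
h = x[::-1]                        # matched filter h(t) = x(T - t)
y = np.convolve(r, h) * dt         # filter output y(t) = r(t) * h(t)
r_mf = y[K - 1]                    # sample at t = T (index K-1 of the full convolution)

print(np.isclose(r_corr, r_mf))    # True: same statistic, two implementations
```

Index K-1 of the full convolution computes the sum of r[k]·h[K-1-k] = r[k]·x[k], which is exactly the discretized correlation integral, mirroring the t = T sampling argument above.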
The Matched Filter Receiver (2)
◊ Figure: a matched-filter implementation of the optimal receiver.
The Matched Filter Receiver (3)
Frequency-domain interpretation:
◊ Property I:
◊ The filter matched to the signal s(t) is h(t) = s(T − t). By the properties of the Fourier transform,
    H(f) = S*(f) e^{−j2πfT}
  i.e. the conjugate of the signal spectrum with a sampling delay of T.
◊ |H(f)| = |S(f)|
◊ ∠H(f) = −∠S(f) − 2πfT
The Matched Filter Receiver (4)
◊ Property II: the signal-to-noise maximizing property.
◊ Assume that r(t) = s(t) + n(t) is passed through a filter h(t), and the output y(t) ≡ yS(t) + v(t) is sampled at time T.
◊ Signal part: F{yS(t)} = H(f) S(f)
    ⇒ yS(T) = ∫ H(f) S(f) e^{j2πfT} df
◊ Zero-mean Gaussian noise: Sv(f) = (N0/2) |H(f)|²
    ⇒ VAR[v(T)] = (N0/2) ∫ |H(f)|² df = (N0/2) Eh
◊ Eh is the energy in h(t).
    (Rayleigh's theorem: ∫ x²(t) dt = ∫ |X(f)|² df = Ex)
The Matched Filter Receiver (5)
◊ The SNR at the output of the filter H(f) is
    SNR_O = yS²(T) / VAR[v(T)]
◊ From the Cauchy-Schwarz inequality,
    yS²(T) = [∫ H(f) S(f) e^{j2πfT} df]² ≤ ∫ |H(f)|² df · ∫ |S(f) e^{j2πfT}|² df = Eh Es
◊ Equality holds iff H(f) = α S*(f) e^{−j2πfT}, α ∈ C.
◊ Therefore
    SNR_O ≤ Eh Es / ((N0/2) Eh) = 2Es/N0
◊ The matched filter h(t) = s(T − t), i.e. H(f) = S*(f) e^{−j2πfT}, maximizes the SNR.
Matched Filter
◊ Time-domain property of the matched filter:
◊ If a signal s(t) is corrupted by AWGN, the filter with an impulse response matched to s(t) maximizes the output signal-to-noise ratio (SNR).
◊ Proof: let us assume the received signal r(t) consists of the signal s(t) and AWGN n(t), which has zero mean and Φnn(f) = N0/2 W/Hz.
◊ Suppose the signal r(t) is passed through a filter with impulse response h(t), 0 ≤ t ≤ T, and its output is sampled at time t = T. The output signal of the filter is:
    y(t) = r(t) * h(t) = ∫₀ᵗ r(τ) h(t − τ) dτ
         = ∫₀ᵗ s(τ) h(t − τ) dτ + ∫₀ᵗ n(τ) h(t − τ) dτ
Matched Filter
◊ Proof (cont.):
◊ At the sampling instant t = T:
    y(T) = ∫₀ᵀ s(τ) h(T − τ) dτ + ∫₀ᵀ n(τ) h(T − τ) dτ = yS(T) + yn(T)
◊ The problem is to select the filter impulse response that maximizes the output SNR_0, defined as:
    SNR_0 = yS²(T) / E[yn²(T)]
◊ The noise power:
    E[yn²(T)] = ∫₀ᵀ∫₀ᵀ E[n(τ) n(t)] h(T − τ) h(T − t) dt dτ
              = (N0/2) ∫₀ᵀ∫₀ᵀ δ(t − τ) h(T − τ) h(T − t) dt dτ
              = (N0/2) ∫₀ᵀ h²(T − t) dt
Matched Filter
◊ Proof (cont.):
◊ By substituting yS(T) and E[yn²(T)] into SNR_0 (with the change of variable τ′ = T − τ):
    SNR_0 = [∫₀ᵀ s(τ) h(T − τ) dτ]² / ((N0/2) ∫₀ᵀ h²(T − t) dt)
          = [∫₀ᵀ h(τ′) s(T − τ′) dτ′]² / ((N0/2) ∫₀ᵀ h²(T − t) dt)
◊ The denominator of the SNR depends on the energy in h(t).
◊ The maximum output SNR over h(t) is obtained by maximizing the numerator subject to the constraint that the denominator is held constant.
Matched Filter
◊ Proof (cont.):
◊ Cauchy-Schwarz inequality: if g1(t) and g2(t) are finite-energy signals, then
    [∫ g1(t) g2(t) dt]² ≤ ∫ g1²(t) dt · ∫ g2²(t) dt
with equality when g1(t) = C g2(t) for any arbitrary constant C.
◊ If we set g1(t) = h(t) and g2(t) = s(T − t), it is clear that the SNR is maximized when h(t) = C s(T − t).
Matched Filter
◊ Proof (cont.):
◊ The output (maximum) SNR obtained with the matched filter h(t) = C s(T − t) is:
    SNR_0 = [∫₀ᵀ s(τ) h(T − τ) dτ]² / ((N0/2) ∫₀ᵀ h²(T − t) dt)
          = [C ∫₀ᵀ s²(τ) dτ]² / ((N0/2) C² ∫₀ᵀ s²(t) dt)
          = (2/N0) ∫₀ᵀ s²(t) dt = 2ε/N0
◊ Note that the output SNR from the matched filter depends on the energy ε of the waveform s(t) but not on the detailed characteristics of s(t).
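The SNR-maximizing property can be verified numerically: among filters, only the time-reversed pulse attains SNR_0 = 2Es/N0. This sketch uses a made-up sinusoidal pulse and compares against an arbitrary rectangular filter.

```python
import numpy as np

K = 100
dt = 0.01                              # symbol interval T = 1
t = np.arange(K) * dt
s = np.sin(np.pi * t)                  # made-up transmitted pulse s(t)
Es = np.sum(s ** 2) * dt               # signal energy
N0 = 0.5

def output_snr(h):
    """Output SNR y_s(T)^2 / VAR[v(T)] for filter h(t) on [0, T)."""
    ys = np.sum(s * h[::-1]) * dt      # y_s(T) = integral s(tau) h(T - tau) dtau
    var = (N0 / 2) * np.sum(h ** 2) * dt
    return ys ** 2 / var

snr_matched = output_snr(s[::-1])          # matched filter h(t) = s(T - t)
snr_mismatched = output_snr(np.ones(K))    # arbitrary rectangular filter
print(snr_matched >= snr_mismatched)       # True
print(np.isclose(snr_matched, 2 * Es / N0))  # True: attains 2 Es / N0
```

The scale invariance in the derivation (any C in h(t) = C s(T − t)) shows up here too: scaling `h` rescales numerator and denominator identically, leaving the SNR unchanged.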
The Matched Filter Receiver (6)
◊ Ex. 4.2-1: M = 4 biorthogonal signals are constructed from the two signals in fig. (a) for transmission in an AWGN channel. The noise is zero-mean with PSD N0/2.
◊ Dimension N = 2, and the basis functions are
    φ1(t) = √(2/T), 0 ≤ t ≤ T/2
    φ2(t) = √(2/T), T/2 ≤ t ≤ T
◊ Impulse responses of the two matched filters (fig. (b)):
    h1(t) = φ1(T − t) = √(2/T), T/2 ≤ t ≤ T
    h2(t) = φ2(T − t) = √(2/T), 0 ≤ t ≤ T/2
◊ Filter output:
    y(t) = s(t) * h(t) = ∫ s(τ) h(t − τ) dτ = ∫ s(τ) φ(T − t + τ) dτ
The Matched Filter Receiver (7)
◊ If s1(t) is transmitted, the noise-free responses of the two matched filters (fig. (c)) sampled at t = T are
    y1s(T) = A√(T/2) = √E;  y2s(T) = 0
◊ If s1(t) is transmitted, the received vector formed from the two matched-filter outputs at the sampling instant t = T is
    r = (r1, r2) = (√E + n1, n2)
◊ Noise: nk = ykn(T) = ∫₀ᵀ n(t) φk(t) dt, k = 1, 2
  a) E[nk] = E[ykn(T)] = 0
  b) VAR[nk] = (N0/2) Eφ = N0/2    (from VAR[v(T)] = (N0/2) Eh, 4.2-52)
◊ SNR for the first matched filter:
    SNR_O = (√E)²/(N0/2) = 2E/N0
Chapter 4.2-3: A Union Bound on Probability of Errors of ML Detection
4.2-3 A Union Bound on Probability of Errors of ML Detection (1)
◊ When the signals are equiprobable, Pm = 1/M, and the ML decision is optimal. The error probability becomes
    Pe = (1/M) Σ_{m=1}^{M} Pe|m = (1/M) Σ_{m=1}^{M} Σ_{1≤m′≤M, m′≠m} ∫_{Dm′} p(r | sm) dr
◊ For the AWGN channel,
    Pe|m = Σ_{m′≠m} ∫_{Dm′} p(r | sm) dr = Σ_{m′≠m} ∫_{Dm′} pn(r − sm) dr
         = Σ_{m′≠m} ∫_{Dm′} (1/(πN0))^{N/2} e^{−||r−sm||²/N0} dr      (4.2-63)
◊ For most constellations, these integrals do not have a closed form.
◊ It is convenient to have upper bounds for the error probability.
◊ The union bound is the simplest and most widely used bound; it is quite tight, particularly at high SNR.
4.2-3 A Union Bound on Probability of Errors of ML Detection (2)
◊ In general, the decision region Dm′ under ML detection is
    Dm′ = {r ∈ R^N : p(r | sm′) > p(r | sk), for all 1 ≤ k ≤ M and k ≠ m′}
◊ Define the decision region for m′ in a binary equiprobable system with signals sm and sm′:
    Dmm′ = {r : p(r | sm′) > p(r | sm)}
◊ Then Dm′ ⊆ Dmm′, so
    ∫_{Dm′} p(r | sm) dr ≤ ∫_{Dmm′} p(r | sm) dr = Pm→m′    (pairwise error probability)
◊ Therefore (4.2-67):
    Pe|m = Σ_{m′≠m} ∫_{Dm′} p(r | sm) dr ≤ Σ_{m′≠m} Pm→m′
    Pe = (1/M) Σ_{m=1}^{M} Σ_{m′≠m} ∫_{Dm′} p(r | sm) dr ≤ (1/M) Σ_{m=1}^{M} Σ_{m′≠m} Pm→m′
4.2-3 A Union Bound on Probability of Errors of ML Detection (3)
◊ In an AWGN channel, the pairwise error probability is (from 4.2-37)
    Pm→m′ = Pb = Q(√(d²mm′/(2N0)))
◊ Therefore
    Pe ≤ (1/M) Σ_{m=1}^{M} Σ_{m′≠m} Q(√(d²mm′/(2N0)))
       ≤ (1/(2M)) Σ_{m=1}^{M} Σ_{m′≠m} e^{−d²mm′/(4N0)}      (using Q(x) ≤ (1/2) e^{−x²/2})
  This is the union bound for an AWGN channel.
◊ Distance enumerator function for a constellation:
    T(X) = Σ_{1≤m,m′≤M, m≠m′} X^{||sm−sm′||²} = Σ_{all distinct d's} ad X^{d²}
◊ ad: the number of ordered pairs (m, m′) s.t. m ≠ m′ and ||sm − sm′|| = d.
4.2-3 A Union Bound on the Probability of Error of ML Detection (4)

◊ In terms of the distance enumerator, the union bound is
$$P_e \le \frac{1}{2M}\sum_{m=1}^{M}\sum_{\substack{1\le m'\le M\\ m'\neq m}} e^{-d_{mm'}^2/(4N_0)} = \frac{1}{2M}\, T(X)\Big|_{X = e^{-1/(4N_0)}}$$
◊ Minimum distance of the constellation:
$$d_{\min} = \min_{\substack{1\le m,m'\le M\\ m\neq m'}} \|\mathbf s_m - \mathbf s_{m'}\|$$
◊ Since Q(x) is decreasing,
$$Q\!\left(\sqrt{\frac{d_{mm'}^2}{2N_0}}\right) \le Q\!\left(\sqrt{\frac{d_{\min}^2}{2N_0}}\right)$$
◊ This gives a looser form of the union bound:
$$P_e \le \frac{1}{M}\sum_{m=1}^{M}\sum_{\substack{1\le m'\le M\\ m'\neq m}} Q\!\left(\sqrt{\frac{d_{mm'}^2}{2N_0}}\right) \le (M-1)\, Q\!\left(\sqrt{\frac{d_{\min}^2}{2N_0}}\right) \le \frac{M-1}{2}\, e^{-d_{\min}^2/(4N_0)}$$
◊ A good constellation provides the maximum possible minimum distance.
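The two forms of the bound above can be evaluated numerically. The following sketch compares the full union bound with its looser d_min form for a unit-energy QPSK constellation; the constellation points and the value of N0 are illustrative assumptions, not part of the text.

```python
from itertools import permutations
from math import erfc, sqrt

def qfunc(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * erfc(x / sqrt(2.0))

def union_bounds(points, N0):
    """Union bound and its looser d_min form for ML detection in AWGN.

    points : list of equiprobable signal vectors s_m.
    Returns (union_bound, loose_bound)."""
    M = len(points)
    # distances over all ordered pairs (m, m'), m != m'
    dists = [sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))
             for p, q in permutations(points, 2)]
    ub = sum(qfunc(sqrt(d * d / (2 * N0))) for d in dists) / M
    dmin = min(dists)
    loose = (M - 1) * qfunc(sqrt(dmin * dmin / (2 * N0)))
    return ub, loose

# Example: unit-energy QPSK, N0 chosen so that Es/N0 = 10 dB
pts = [(1, 0), (0, 1), (-1, 0), (0, -1)]
ub, loose = union_bounds(pts, N0=0.1)
```

As expected, the d_min form is never tighter than the full double sum, since every pairwise term is replaced by the worst (largest) one.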
4.2-3 A Union Bound on the Probability of Error of ML Detection (5)

◊ Example 4.2-2: Consider a 16-QAM constellation (M = 16).
◊ From Chapter 3.2 (Equation 3.2-44), the minimum distance is
$$d_{\min} = \sqrt{\frac{6\log_2 M}{M-1}\, E_{bavg}} = \sqrt{\frac{8}{5}\, E_{bavg}}$$
◊ There are 16 × 15 = 240 possible ordered pairs of signal points.
◊ Distance enumerator function (with d = d_min):
$$T(X) = 48X^{d^2} + 36X^{2d^2} + 32X^{4d^2} + 48X^{5d^2} + 16X^{8d^2} + 16X^{9d^2} + 24X^{10d^2} + 16X^{13d^2} + 4X^{18d^2}$$
◊ Upper bound on the error probability:
$$P_e \le \frac{1}{32}\, T\!\left(e^{-\frac{1}{4N_0}}\right)$$
4.2-3 A Union Bound on the Probability of Error of ML Detection (6)

◊ The looser form of the union bound gives
$$P_e \le \frac{M-1}{2}\, e^{-d_{\min}^2/(4N_0)} = \frac{15}{2}\, e^{-\frac{2E_{bavg}}{5N_0}}$$
◊ When the SNR is large, T(X) ≈ 48X^{d_min²}, so
$$P_e \le \frac{1}{32}\, T\!\left(e^{-\frac{1}{4N_0}}\right) \approx \frac{48}{32}\, e^{-d_{\min}^2/(4N_0)} = \frac{3}{2}\, e^{-\frac{2E_{bavg}}{5N_0}}$$
◊ The exact error probability is (see Example 4.3-1)
$$P_e = 3\,Q\!\left(\sqrt{\frac{4E_{bavg}}{5N_0}}\right)\left[1 - \frac{3}{4}\,Q\!\left(\sqrt{\frac{4E_{bavg}}{5N_0}}\right)\right]$$
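The union bound of Example 4.2-2 can be checked numerically against the exact expression. A minimal sketch, normalizing E_bavg = 1 so that N0 = 1/(Eb/N0):

```python
from math import erfc, sqrt, exp

def qfunc(x):
    return 0.5 * erfc(x / sqrt(2.0))

def ser_16qam_exact(ebn0):
    """Exact 16-QAM symbol error rate; ebn0 is Eb/N0 as a linear ratio."""
    q = qfunc(sqrt(4 * ebn0 / 5))
    return 3 * q * (1 - 0.75 * q)

def ser_16qam_union(ebn0):
    """Union bound (1/32) * T(exp(-1/(4 N0))) with d_min^2 = (8/5) E_bavg."""
    # exponents of T(X) in multiples of d_min^2, with their pair counts a_d
    terms = {1: 48, 2: 36, 4: 32, 5: 48, 8: 16, 9: 16, 10: 24, 13: 16, 18: 4}
    d2 = 8.0 / 5.0           # d_min^2 for E_bavg = 1
    N0 = 1.0 / ebn0          # E_bavg = 1  =>  N0 = 1/(Eb/N0)
    return sum(a * exp(-k * d2 / (4 * N0)) for k, a in terms.items()) / 32.0
```

At moderate-to-high SNR the bound sits above the exact curve, as it must.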
Lower Bound on the Probability of Error (1)

◊ In an equiprobable M-ary signaling scheme,
$$P_e = \frac{1}{M}\sum_{m=1}^{M} P[\text{Error}\mid \mathbf s_m \text{ sent}] = \frac{1}{M}\sum_{m=1}^{M}\int_{D_m^c} p(\mathbf r\mid\mathbf s_m)\,d\mathbf r$$
◊ Since D_m ⊆ D_{mm'}, we have D_{mm'}^c ⊆ D_m^c, and hence for any m' ≠ m
$$P_e \ge \frac{1}{M}\sum_{m=1}^{M}\int_{D_{mm'}^c} p(\mathbf r\mid\mathbf s_m)\,d\mathbf r = \frac{1}{M}\sum_{m=1}^{M} Q\!\left(\frac{d_{mm'}}{\sqrt{2N_0}}\right), \qquad \text{for all } m'\neq m$$
◊ To derive the tightest lower bound, maximize the right-hand side (i.e., for each m find the m' such that d_{mm'} is minimized):
$$P_e \ge \frac{1}{M}\sum_{m=1}^{M}\ \max_{m'\neq m}\ Q\!\left(\frac{d_{mm'}}{\sqrt{2N_0}}\right) = \frac{1}{M}\sum_{m=1}^{M} Q\!\left(\frac{d_{\min}^{(m)}}{\sqrt{2N_0}}\right)$$
◊ d_min^{(m)} : distance from s_m to its nearest neighbor.
Lower Bound on the Probability of Error (2)

◊ Since d_min^{(m)} ≥ d_min (where d_min^{(m)} denotes the distance from s_m to its nearest neighbor in the constellation), we can keep only the terms for which the nearest neighbor is at distance d_min:
$$Q\!\left(\frac{d_{\min}^{(m)}}{\sqrt{2N_0}}\right) \ge \begin{cases} Q\!\left(\dfrac{d_{\min}}{\sqrt{2N_0}}\right), & \text{at least one signal at distance } d_{\min} \text{ from } \mathbf s_m\\[6pt] 0, & \text{otherwise}\end{cases}$$
◊ Hence
$$P_e \ge \frac{1}{M}\sum_{\substack{m:\ \exists\, m'\neq m\\ \|\mathbf s_m-\mathbf s_{m'}\|=d_{\min}}} Q\!\left(\frac{d_{\min}}{\sqrt{2N_0}}\right) = \frac{N_{\min}}{M}\, Q\!\left(\frac{d_{\min}}{\sqrt{2N_0}}\right)$$
◊ N_min : number of points m in the constellation such that there exists m' ≠ m with ||s_m − s_{m'}|| = d_min.
◊ Combining with the union bound:
$$\frac{N_{\min}}{M}\, Q\!\left(\frac{d_{\min}}{\sqrt{2N_0}}\right) \le P_e \le (M-1)\, Q\!\left(\frac{d_{\min}}{\sqrt{2N_0}}\right)$$
Chapter 4.3: Optimal Detection and Error Probability for Band-Limited Signaling
Chapter 4.3-1 Optimal Detection & Error Prob. for ASK or PAM Signaling

◊ In this section we study signaling schemes that are mainly characterized by their low bandwidth requirements.
◊ These signaling schemes have low dimensionality, which is independent of the number of transmitted signals; as we will see, their power efficiency decreases as the number of messages increases.
◊ This family of signaling schemes includes ASK, PSK, and QAM.
Chapter 4.3-1 Optimal Detection & Error Prob. for ASK or PAM Signaling

◊ For an ASK signaling scheme (3.2-22):
$$d_{\min}^2 = \frac{12\log_2 M}{M^2-1}\, E_{bavg}$$
◊ The constellation points are {±d_min/2, ±3d_min/2, …, ±(M−1)d_min/2}.
◊ There are two types of signal points:
◊ M−2 inner points: a detection error occurs if |n| > d_min/2.
◊ 2 outer points: the error probability is half that of an inner point.
◊ Let P_ei and P_eo be the error probabilities of the inner and outer points:
$$P_{ei} = P\!\left[|n| > \frac{d_{\min}}{2}\right] = 2\,Q\!\left(\sqrt{\frac{d_{\min}^2}{2N_0}}\right), \qquad P_{eo} = \frac{1}{2}P_{ei} = Q\!\left(\sqrt{\frac{d_{\min}^2}{2N_0}}\right)$$
Chapter 4.3-1 Optimal Detection & Error Prob. for ASK or PAM Signaling

◊ Symbol error probability:
$$P_e = \frac{1}{M}\sum_{m=1}^{M} P[\text{error}\mid m \text{ sent}] = \frac{1}{M}\left[2(M-2)\,Q\!\left(\sqrt{\frac{d_{\min}^2}{2N_0}}\right) + 2\,Q\!\left(\sqrt{\frac{d_{\min}^2}{2N_0}}\right)\right]$$
$$= \frac{2(M-1)}{M}\, Q\!\left(\sqrt{\frac{d_{\min}^2}{2N_0}}\right) = 2\left(1-\frac{1}{M}\right) Q\!\left(\sqrt{\frac{6\log_2 M}{M^2-1}\cdot\frac{E_{bavg}}{N_0}}\right) \approx 2\,Q\!\left(\sqrt{\frac{6\log_2 M}{M^2-1}\cdot\frac{E_{bavg}}{N_0}}\right) \quad \text{for large } M$$
◊ The effective SNR per bit decreases with M: doubling M (increasing the rate by 1 bit/transmission) requires about 4 times the SNR per bit to keep the same performance.
Chapter 4.3-1 Optimal Detection & Error Prob. for ASK or PAM Signaling

◊ [Figure: symbol error probability for ASK/PAM signaling.] For large M, the gap between the curves for M and 2M is roughly 6 dB.
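The roughly-6-dB observation above can be verified numerically. A small sketch that finds, by bisection, the Eb/N0 (in dB) needed to reach a target symbol error rate, and measures the gap between M and 2M; the target SER of 1e-5 is an illustrative assumption.

```python
from math import erfc, sqrt, log2

def qfunc(x):
    return 0.5 * erfc(x / sqrt(2.0))

def ser_pam(M, ebn0):
    """Symbol error rate of M-ary PAM/ASK; ebn0 is Eb/N0 as a linear ratio."""
    return 2 * (1 - 1 / M) * qfunc(sqrt(6 * log2(M) / (M * M - 1) * ebn0))

def required_ebn0_db(M, target=1e-5):
    """Smallest Eb/N0 (dB) reaching the target SER, found by bisection."""
    lo, hi = 0.0, 60.0                         # search window in dB
    for _ in range(60):
        mid = (lo + hi) / 2
        if ser_pam(M, 10 ** (mid / 10)) > target:
            lo = mid                           # need more SNR
        else:
            hi = mid
    return hi

gap_db = required_ebn0_db(64) - required_ebn0_db(32)
```

For M = 32 versus M = 64 the gap is already above 5 dB, approaching 6 dB as M grows.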
Chapter 4.3-2 Optimal Detection & Error Prob. for PSK Signaling

◊ In M-ary PSK signaling, assume the signals are equiprobable. Then:
◊ Minimum-distance decision is optimal.
◊ The error probability equals the error probability when s₁ is transmitted.
◊ When s₁ = (√E, 0) is transmitted, the received signal is
$$\mathbf r = (r_1, r_2) = (\sqrt E + n_1,\ n_2)$$
◊ r₁ ~ N(√E, N0/2) and r₂ ~ N(0, N0/2) are independent:
$$p(r_1, r_2) = \frac{1}{\pi N_0}\exp\!\left(-\frac{(r_1-\sqrt E)^2 + r_2^2}{N_0}\right)$$
◊ In polar coordinates, V = √(r₁² + r₂²) and Θ = arctan(r₂/r₁):
$$p_{V,\Theta}(v,\theta) = \frac{v}{\pi N_0}\exp\!\left(-\frac{v^2 + E - 2\sqrt E\, v\cos\theta}{N_0}\right)$$
Chapter 4.3-2 Optimal Detection & Error Prob. for PSK Signaling

◊ The marginal PDF of Θ is (normalizing v by √(N0/2) in the last step)
$$p_\Theta(\theta) = \int_0^\infty p_{V,\Theta}(v,\theta)\,dv = \int_0^\infty \frac{v}{\pi N_0}\, e^{-\frac{v^2+E-2\sqrt E\,v\cos\theta}{N_0}}\, dv = \frac{1}{2\pi}\, e^{-\gamma_s\sin^2\theta}\int_0^\infty v\, e^{-\frac{\left(v-\sqrt{2\gamma_s}\cos\theta\right)^2}{2}}\, dv$$
◊ Symbol SNR: γ_s = E/N0.
◊ As γ_s increases, p_Θ(θ) becomes more peaked around θ = 0.
Chapter 4.3-2 Optimal Detection & Error Prob. for PSK Signaling

◊ Decision region: D₁ = {θ : −π/M < θ ≤ π/M}.
◊ The error probability is
$$P_e = 1 - \int_{-\pi/M}^{\pi/M} p_\Theta(\theta)\,d\theta$$
◊ It does not have a simple form except for M = 2 or 4.
◊ When M = 2 (binary antipodal signaling):
$$P_b = Q\!\left(\sqrt{2E_b/N_0}\right)$$
◊ When M = 4 (two binary phase modulation signals in quadrature):
$$P_c = (1-P_b)^2 = \left[1 - Q\!\left(\sqrt{2E_b/N_0}\right)\right]^2$$
$$P_e = 1 - P_c = 2\,Q\!\left(\sqrt{2E_b/N_0}\right)\left[1 - \frac{1}{2}\,Q\!\left(\sqrt{2E_b/N_0}\right)\right]$$
Chapter 4.3-2 Optimal Detection & Error Prob. for PSK Signaling

◊ [Figure: symbol error probability of M-PSK.] As M increases, the required SNR increases.
Chapter 4.3-2 Optimal Detection & Error Prob. for PSK Signaling

◊ For large SNR (E/N0 >> 1), p_Θ(θ) is approximated by
$$p_\Theta(\theta) \approx \sqrt{\frac{\gamma_s}{\pi}}\,\cos\theta\; e^{-\gamma_s\sin^2\theta}, \qquad |\theta| \le \pi/2$$
◊ The error probability is then approximated by (substituting u = √(2γ_s) sinθ):
$$P_e \approx 1 - \int_{-\pi/M}^{\pi/M} \sqrt{\frac{\gamma_s}{\pi}}\,\cos\theta\; e^{-\gamma_s \sin^2\theta}\, d\theta \approx \frac{2}{\sqrt{2\pi}}\int_{\sqrt{2\gamma_s}\,\sin(\pi/M)}^{\infty} e^{-u^2/2}\, du$$
$$= 2\,Q\!\left(\sqrt{2\gamma_s}\,\sin\frac{\pi}{M}\right) = 2\,Q\!\left(\sqrt{2\log_2 M\,\frac{E_b}{N_0}}\,\sin\frac{\pi}{M}\right)$$
◊ with γ_s = E/N0 and γ_b = E_b/N0 = γ_s / log₂M.
◊ When M = 2 or 4: P_e ≈ 2Q(√(2E_b/N0)).
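The high-SNR approximation above is essentially exact for QPSK, where the exact expression is also available. A minimal sketch comparing the two:

```python
from math import erfc, sqrt, log2, sin, pi

def qfunc(x):
    return 0.5 * erfc(x / sqrt(2.0))

def ser_mpsk_approx(M, ebn0):
    """High-SNR approximation 2*Q(sqrt(2*log2(M)*Eb/N0)*sin(pi/M))."""
    return 2 * qfunc(sqrt(2 * log2(M) * ebn0) * sin(pi / M))

def ser_qpsk_exact(ebn0):
    """Exact QPSK SER: two independent binary decisions in quadrature."""
    pb = qfunc(sqrt(2 * ebn0))
    return 1 - (1 - pb) ** 2
```

At Eb/N0 = 10 dB the two values agree to well under one percent, since the neglected Q² term is tiny.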
Chapter 4.3-2 Optimal Detection & Error Prob. for PSK Signaling

◊ For large M and large SNR:
◊ sin(π/M) ≈ π/M, so the error probability is approximated by
$$P_e \approx 2\,Q\!\left(\sqrt{\frac{2\pi^2\log_2 M}{M^2}\cdot\frac{E_b}{N_0}}\right) \qquad \text{for large } M$$
◊ For large M, doubling M reduces the effective SNR by 6 dB.
◊ When a Gray code is used in the mapping:
◊ Since the most probable errors are erroneous selections of a phase adjacent to the true phase,
$$P_b \approx \frac{1}{k}\, P_e$$
Differentially Encoded PSK Signaling

◊ In practice, the carrier phase is extracted from the received signal by performing a nonlinear operation, which introduces a phase ambiguity.
◊ For BPSK:
• The received signal is first squared.
• The resulting double-frequency component is filtered.
• The signal is divided by 2 in frequency to extract an estimate of the carrier frequency and phase φ.
• This operation results in a phase ambiguity of 180° in the carrier phase.
◊ For QPSK, there are phase ambiguities of ±90° and 180° in the phase estimate.
◊ Consequently, we do not have an absolute estimate of the carrier phase for demodulation.

Differentially Encoded PSK Signaling

◊ The phase ambiguity can be overcome by encoding the information in phase differences between successive signals.
◊ In BPSK:
◊ Bit 1 is transmitted by shifting the phase of the carrier by 180°.
◊ Bit 0 is transmitted by a zero phase shift.
◊ In QPSK, the phase shifts are 0°, 90°, 180°, and −90°, corresponding to bits 00, 01, 11, and 10, respectively.
◊ The PSK signals resulting from this encoding process are said to be differentially encoded.
◊ The detector is a simple phase comparator that compares the phases of the demodulated signal over two consecutive intervals to extract the information.

Differentially Encoded PSK Signaling

◊ Coherent demodulation of differentially encoded PSK results in a higher error probability than that derived for absolute phase encoding.
◊ With differentially encoded PSK, an error in the demodulated phase of the signal in any given interval usually results in decoding errors over two consecutive signaling intervals.
◊ The error probability of differentially encoded M-ary PSK is approximately twice that of M-ary PSK with absolute phase encoding.
◊ This corresponds to only a relatively small loss in SNR.
Chapter 4.3-3 Optimal Detection & Error Prob. for QAM Signaling

◊ In the detection of QAM signals, we need two filters matched to
$$\phi_1(t) = \sqrt{2/E_g}\; g(t)\cos 2\pi f_c t, \qquad \phi_2(t) = -\sqrt{2/E_g}\; g(t)\sin 2\pi f_c t$$
◊ Output of the matched filters: r = (r₁, r₂).
◊ Compute (see 4.2-28)
$$C(\mathbf r, \mathbf s_m) = 2\,\mathbf r\cdot\mathbf s_m - \|\mathbf s_m\|^2$$
◊ Select m̂ = argmax_{1≤m≤M} C(r, s_m).
◊ To determine P_e, the signal constellation must be specified.
◊ For M = 4, figures (a) and (b) show two possible constellations; assume both have d_min = 2A.
◊ (a): all points at radius √2 A, so E_avg = 2A².
◊ (b): with amplitudes A₁ = A and A₂ = √3 A, E_avg = ¼[2(√3A)² + 2A²] = 2A².
Chapter 4.3-3 Optimal Detection & Error Prob. for QAM Signaling

◊ When M = 8, there are four possible constellations, shown in figures (a)–(d).
◊ Signal points: (A_mc, A_ms); all have d_min = 2A.
◊ Average energy:
$$E_{avg} = \frac{1}{M}\sum_{m=1}^{M}\left(A_{mc}^2 + A_{ms}^2\right) = \frac{A^2}{M}\sum_{m=1}^{M}\left(a_{mc}^2 + a_{ms}^2\right)$$
◊ (a) and (c): E_avg = 6A².
◊ (b): E_avg = 6.83A².
◊ (d): E_avg = 4.73A².
◊ Constellation (d) requires the least energy.
Chapter 4.3-3 Optimal Detection & Error Prob. for QAM Signaling

◊ Rectangular QAM:
◊ Generated by two PAM signals on in-phase and quadrature carriers.
◊ Easily demodulated.
◊ For M ≥ 16, it requires only slightly more energy than the best M-QAM constellation.
◊ When k is even, the constellation is square, and the minimum distance is
$$d_{\min} = \sqrt{\frac{6\log_2 M}{M-1}\, E_{bavg}}$$
◊ A square M-QAM constellation can be viewed as two √M-ary PAM constellations.
◊ An error occurs if either n₁ or n₂ is large enough to cause an error.
◊ The probability of a correct decision is
$$P_{c,M\text{-QAM}} = P_{c,\sqrt M\text{-PAM}}^2 = \left(1 - P_{e,\sqrt M\text{-PAM}}\right)^2$$
Chapter 4.3-3 Optimal Detection & Error Prob. for QAM Signaling

◊ Probability of error of square M-QAM:
$$P_{e,M\text{-QAM}} = 1 - \left(1 - P_{e,\sqrt M\text{-PAM}}\right)^2 = 2\,P_{e,\sqrt M\text{-PAM}}\left(1 - \frac{1}{2}\,P_{e,\sqrt M\text{-PAM}}\right)$$
◊ The error probability of the constituent √M-PAM is (from (4.3-4) and (4.3-5)):
$$P_{e,\sqrt M\text{-PAM}} = 2\left(1 - \frac{1}{\sqrt M}\right) Q\!\left(\sqrt{\frac{3\log_2 M}{M-1}\cdot\frac{E_{bavg}}{N_0}}\right)$$
◊ Thus, the error probability of square M-QAM is
$$P_{e,M\text{-QAM}} = 4\left(1-\frac{1}{\sqrt M}\right) Q\!\left(\sqrt{\frac{3\log_2 M}{M-1}\cdot\frac{E_{bavg}}{N_0}}\right)\left[1 - \left(1-\frac{1}{\sqrt M}\right) Q\!\left(\sqrt{\frac{3\log_2 M}{M-1}\cdot\frac{E_{bavg}}{N_0}}\right)\right] \le 4\,Q\!\left(\sqrt{\frac{3\log_2 M}{M-1}\cdot\frac{E_{bavg}}{N_0}}\right)$$
◊ The upper bound is quite tight for large M.
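The exact square-QAM expression and its 4Q(·) upper bound are easy to tabulate. A minimal sketch:

```python
from math import erfc, sqrt, log2

def qfunc(x):
    return 0.5 * erfc(x / sqrt(2.0))

def ser_square_qam(M, ebn0):
    """Exact SER of square M-QAM (M = 4, 16, 64, ...); ebn0 is Eb/N0 linear."""
    p_pam = 2 * (1 - 1 / sqrt(M)) * qfunc(sqrt(3 * log2(M) / (M - 1) * ebn0))
    return 1 - (1 - p_pam) ** 2          # error if either quadrature errs

def ser_square_qam_bound(M, ebn0):
    """Simple upper bound 4*Q(...), tight for large M."""
    return 4 * qfunc(sqrt(3 * log2(M) / (M - 1) * ebn0))
```

For 16-QAM at 10 dB the bound exceeds the exact value by roughly the factor 1/(1 − 1/√M), i.e., about 33%.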
Chapter 4.3-3 Optimal Detection & Error Prob. for QAM Signaling

◊ The penalty for increasing the transmission rate is 3 dB/bit for QAM.
◊ The penalty for increasing the transmission rate is 6 dB/bit for PAM and PSK.
◊ QAM is therefore more power-efficient than PAM and PSK.
◊ The advantage of PSK is its constant-envelope property.
◊ More comparisons are shown in the text (page 200).
Chapter 4.3-4 Demodulation and Detection

◊ ASK, PSK, and QAM have one- or two-dimensional constellations.
◊ Basis functions of PSK and QAM:
$$\phi_1(t) = \sqrt{2/E_g}\; g(t)\cos 2\pi f_c t, \qquad \phi_2(t) = -\sqrt{2/E_g}\; g(t)\sin 2\pi f_c t$$
◊ Basis function of PAM:
$$\phi_1(t) = \sqrt{2/E_g}\; g(t)\cos 2\pi f_c t$$
◊ r(t) and the basis functions are bandpass, which would require a high sampling rate.
◊ To relieve the requirement on the sampling rate:
First, demodulate the signal to obtain its lowpass equivalent.
Then, perform signal detection.
Chapter 4.3-4 Demodulation and Detection

◊ From Chapter 2.1 (2.1-21 and 2.1-24):
$$E_x = E_{x_l}/2, \qquad \langle x(t), y(t)\rangle = \frac{1}{2}\,\mathrm{Re}\!\left[\langle x_l(t), y_l(t)\rangle\right]$$
◊ The optimal (MAP) detection rule becomes
$$\hat m = \arg\max_{1\le m\le M}\left(\mathbf r\cdot\mathbf s_m + \frac{N_0}{2}\ln P_m - \frac{E_m}{2}\right) = \arg\max_{1\le m\le M}\left(\mathrm{Re}\!\left[\mathbf r_l\cdot\mathbf s_{ml}^*\right] + N_0\ln P_m - \frac{E_{ml}}{2}\right)$$
$$= \arg\max_{1\le m\le M}\left(\mathrm{Re}\!\left[\int_{-\infty}^{\infty} r_l(t)\, s_{ml}^*(t)\, dt\right] + N_0\ln P_m - \frac{1}{2}\int_{-\infty}^{\infty}\left|s_{ml}(t)\right|^2 dt\right)$$
◊ The ML decision rule is
$$\hat m = \arg\max_{1\le m\le M}\left(\mathrm{Re}\!\left[\int_{-\infty}^{\infty} r_l(t)\, s_{ml}^*(t)\, dt\right] - \frac{1}{2}\int_{-\infty}^{\infty}\left|s_{ml}(t)\right|^2 dt\right)$$
Chapter 4.3-4 Demodulation and Detection

◊ [Figures: a complex matched filter, and the detailed structure of a complex matched filter in terms of its in-phase and quadrature components.]
◊ Throughout this discussion we have assumed that the receiver has complete knowledge of the carrier frequency and phase.
Chapter 4.4: Optimal Detection and Error Probability for Power-Limited Signaling
Chapter 4.4-1 Optimal Detection & Error Prob. for Orthogonal Signaling

◊ In an equal-energy orthogonal signaling scheme, N = M and
$$\mathbf s_1 = (\sqrt E, 0, \ldots, 0),\quad \mathbf s_2 = (0, \sqrt E, \ldots, 0),\quad \ldots,\quad \mathbf s_M = (0, 0, \ldots, \sqrt E)$$
◊ For equiprobable, equal-energy orthogonal signals, the optimum detector selects the largest cross-correlation between r and s_m:
$$\hat m = \arg\max_{1\le m\le M}\ \mathbf r\cdot\mathbf s_m$$
◊ Since the constellation is symmetric and the distance between any two signal points is √(2E), the error probability is independent of the transmitted signal.
Chapter 4.4-1 Optimal Detection & Error Prob. for Orthogonal Signaling

◊ Suppose that s₁ is transmitted; the received vector is
$$\mathbf r = (\sqrt E + n_1,\ n_2,\ \ldots,\ n_M)$$
◊ E is the symbol energy; (n₁, n₂, …, n_M) are i.i.d. zero-mean Gaussian random variables with σ_n² = N0/2.
◊ Define the random variables R_m = r·s_m, 1 ≤ m ≤ M:
$$R_1 = \mathbf r\cdot\mathbf s_1 = E + \sqrt E\, n_1, \qquad R_m = \sqrt E\, n_m,\quad 2\le m\le M$$
◊ A correct decision is made if R₁ > R_m for m = 2, 3, …, M:
$$P_c = P[R_1 > R_2,\ R_1 > R_3,\ \ldots,\ R_1 > R_M \mid \mathbf s_1 \text{ sent}] = P[n_2 < \sqrt E + n_1,\ \ldots,\ n_M < \sqrt E + n_1 \mid \mathbf s_1 \text{ sent}]$$
$$= \int_{-\infty}^{\infty}\left(P[n_2 < \sqrt E + n_1 \mid \mathbf s_1 \text{ sent},\ n_1]\right)^{M-1} p(n_1)\, dn_1$$
◊ (Conditioning on n₁ — Bayes' theorem — and using the fact that n₂, n₃, …, n_M are i.i.d.)
Chapter 4.4-1 Optimal Detection & Error Prob. for Orthogonal Signaling

◊ Since n₂ ~ N(0, N0/2),
$$P[n_2 < \sqrt E + n_1 \mid \mathbf s_1 \text{ sent},\ n_1] = 1 - Q\!\left(\frac{n_1 + \sqrt E}{\sqrt{N_0/2}}\right)$$
◊ Hence, with the substitution x = (n₁ + √E)/√(N0/2),
$$P_c = \int_{-\infty}^{\infty}\left[1 - Q\!\left(\frac{n_1+\sqrt E}{\sqrt{N_0/2}}\right)\right]^{M-1}\frac{1}{\sqrt{\pi N_0}}\, e^{-n_1^2/N_0}\, dn_1 = \int_{-\infty}^{\infty}\left(1-Q(x)\right)^{M-1}\frac{1}{\sqrt{2\pi}}\, e^{-\left(x-\sqrt{2E/N_0}\right)^2/2}\, dx$$
◊ The symbol error probability is
$$P_e = 1 - P_c = 1 - \int_{-\infty}^{\infty}\left(1-Q(x)\right)^{M-1}\frac{1}{\sqrt{2\pi}}\, e^{-\left(x-\sqrt{2E/N_0}\right)^2/2}\, dx$$
◊ By symmetry,
$$P[\mathbf s_m \text{ received}\mid \mathbf s_k \text{ sent}] = \frac{P_e}{M-1} = \frac{P_e}{2^k-1}, \qquad 2\le m\le M$$
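The P_e integral above has no closed form, but it is straightforward to evaluate numerically. A sketch using a simple trapezoidal rule (the step count and integration span are illustrative choices):

```python
from math import erfc, sqrt, exp, pi

def qfunc(x):
    return 0.5 * erfc(x / sqrt(2.0))

def ser_orthogonal(M, es_n0, steps=4000, span=12.0):
    """SER of M-ary equal-energy orthogonal signaling by numerical
    integration of 1 - P_c; es_n0 is the symbol SNR E/N0 (linear)."""
    mu = sqrt(2 * es_n0)                     # mean of the shifted Gaussian
    lo, hi = mu - span, mu + span            # integrate where the Gaussian lives
    h = (hi - lo) / steps
    pc = 0.0
    for i in range(steps + 1):
        x = lo + i * h
        w = 0.5 if i in (0, steps) else 1.0  # trapezoidal weights
        pc += w * (1 - qfunc(x)) ** (M - 1) * exp(-(x - mu) ** 2 / 2) / sqrt(2 * pi)
    return 1 - pc * h
```

A useful sanity check: for M = 2 the integral collapses to the known binary orthogonal result P_e = Q(√(E/N0)).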
Chapter 4.4-1 Optimal Detection & Error Prob. for Orthogonal Signaling

◊ Assume that s₁ corresponds to a data sequence of length k whose first bit is 0.
◊ The probability that the first bit is detected as 1 equals the probability of detecting one of the 2^{k−1} signals {s_m : first bit = 1}:
$$P_b = \frac{2^{k-1}}{2^k - 1}\, P_e \approx \frac{P_e}{2}$$
◊ The last approximation holds for k >> 1.
◊ [Figure: probability of bit error vs. SNR per bit.] Increasing M reduces the required SNR, in contrast with ASK, PSK, and QAM.
Error Probability in FSK Signaling

◊ FSK signaling is a special case of orthogonal signaling when
$$\Delta f = \frac{l}{2T}, \qquad \text{for any positive integer } l$$
◊ In binary FSK, a frequency separation that guarantees orthogonality does not minimize the error probability.
◊ For binary FSK, the error probability is minimized when (see Problem 4.18)
$$\Delta f\, T = 0.715$$
A Union Bound on the Probability of Error in Orthogonal Signaling

◊ From Section 4.2-3, the union bound in an AWGN channel is
$$P_e \le \frac{M-1}{2}\, e^{-d_{\min}^2/(4N_0)}$$
◊ In orthogonal signaling, d_min = √(2E), so
$$P_e \le \frac{M-1}{2}\, e^{-E/(2N_0)} < M\, e^{-E/(2N_0)}$$
◊ Using M = 2^k and E = kE_b,
$$P_e < 2^k\, e^{-kE_b/(2N_0)} = e^{-k\left(\frac{E_b}{2N_0} - \ln 2\right)}$$
◊ If
$$\frac{E_b}{N_0} > 2\ln 2 = 1.39\ (1.42\ \mathrm{dB}), \qquad \text{then } P_e \to 0 \text{ as } k\to\infty$$
◊ So if the SNR per bit exceeds 1.42 dB, reliable communication is possible (sufficient, but not necessary).
A Union Bound on the Probability of Error in Orthogonal Signaling

◊ A necessary and sufficient condition for reliable communication is
$$\frac{E_b}{N_0} > \ln 2 = 0.693\ (-1.6\ \mathrm{dB})$$
◊ The −1.6 dB bound is obtained from a tighter bound on the error probability:
$$P_e \le \begin{cases} 2\, e^{-k\left(\frac{E_b}{2N_0} - \ln 2\right)}, & E_b/N_0 > 4\ln 2\\[6pt] 2\, e^{-k\left(\sqrt{E_b/N_0} - \sqrt{\ln 2}\right)^2}, & \ln 2 \le E_b/N_0 \le 4\ln 2 \end{cases}$$
◊ The minimum value of SNR per bit needed, i.e., −1.6 dB, is the Shannon limit.
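The two-regime bound above can be sketched directly; the function below is an illustrative evaluation (the choice of k values in the usage is an assumption):

```python
from math import exp, log, sqrt

LN2 = log(2.0)

def orthogonal_pe_bound(k, ebn0):
    """Two-regime upper bound on P_e for M = 2**k orthogonal signals.
    Valid only for Eb/N0 >= ln 2; the second branch is the tighter
    form used in the range ln 2 <= Eb/N0 <= 4 ln 2."""
    if ebn0 > 4 * LN2:
        return 2 * exp(-k * (ebn0 / 2 - LN2))
    if ebn0 >= LN2:
        return 2 * exp(-k * (sqrt(ebn0) - sqrt(LN2)) ** 2)
    raise ValueError("below the Shannon limit; the bound does not apply")
```

For any fixed Eb/N0 above ln 2, the bound decays exponentially in k, which is exactly the "P_e → 0 as k → ∞" statement.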
Chapter 4.4-2 Optimal Detection & Error Prob. for Bi-orthogonal Signaling

◊ A set of M = 2^k biorthogonal signals is obtained from N = M/2 orthogonal signals by including the negatives of these signals.
◊ It requires only M/2 cross-correlators or matched filters.
◊ Vector representation of biorthogonal signals:
$$\mathbf s_1 = -\mathbf s_{N+1} = (\sqrt E, 0, \ldots, 0),\quad \mathbf s_2 = -\mathbf s_{N+2} = (0, \sqrt E, \ldots, 0),\quad \ldots,\quad \mathbf s_N = -\mathbf s_{2N} = (0, 0, \ldots, \sqrt E)$$
◊ Assume that s₁ is transmitted; the received signal vector is
$$\mathbf r = (\sqrt E + n_1,\ n_2,\ \ldots,\ n_N)$$
◊ {n_m} are zero-mean i.i.d. Gaussian random variables with σ_n² = N0/2.
Chapter 4.4-2 Optimal Detection & Error Prob. for Bi-orthogonal Signaling

◊ Since all signals are equiprobable and have equal energy, the optimum detector decides in favor of the m with the largest magnitude of
$$C(\mathbf r, \mathbf s_m) = \mathbf r\cdot\mathbf s_m, \qquad 1\le m\le M/2$$
◊ The sign of the selected cross-correlation decides whether s_m(t) or −s_m(t) was transmitted.
◊ Probability of a correct decision:
$$P[\text{correct decision}] = P\!\left[r_1 = \sqrt E + n_1 > 0,\ \ |r_m| < r_1 \text{ for } m = 2, 3, \ldots, M/2\right]$$
◊ But, given r₁ > 0,
$$P[\,|r_m| < r_1 \mid r_1 > 0\,] = \frac{1}{\sqrt{\pi N_0}}\int_{-r_1}^{r_1} e^{-x^2/N_0}\, dx = \frac{1}{\sqrt{2\pi}}\int_{-r_1\sqrt{2/N_0}}^{\,r_1\sqrt{2/N_0}} e^{-x^2/2}\, dx$$
◊ The probability of a correct decision is therefore, with r₁ ~ N(√E, N0/2) and v = (r₁ − √E)√(2/N0),
$$P_c = \int_0^{\infty}\left(\frac{1}{\sqrt{2\pi}}\int_{-r_1\sqrt{2/N_0}}^{\,r_1\sqrt{2/N_0}} e^{-x^2/2}\, dx\right)^{\frac{M}{2}-1} p(r_1)\, dr_1 = \int_{-\sqrt{2E/N_0}}^{\infty}\left(\frac{1}{\sqrt{2\pi}}\int_{-\left(v+\sqrt{2E/N_0}\right)}^{\,v+\sqrt{2E/N_0}} e^{-x^2/2}\, dx\right)^{\frac{M}{2}-1}\frac{e^{-v^2/2}}{\sqrt{2\pi}}\, dv$$
Chapter 4.4-2 Optimal Detection & Error Prob. for Bi-orthogonal Signaling

◊ Symbol error probability: P_e = 1 − P_c.
◊ [Figure: P_e vs. E_b/N0, with E = kE_b; the Shannon limit is indicated.]
Chapter 4.4-3 Optimal Detection & Error Prob. for Simplex Signaling

◊ Simplex signals are obtained by shifting a set of orthogonal signals by the average of those orthogonal signals.
◊ The geometry of the simplex signals is exactly the same as that of the original orthogonal signals.
◊ The error probability therefore equals that of the original orthogonal signals.
◊ Since simplex signals have lower energy, the energy in the error-probability expression is scaled by M/(M−1), i.e.,
$$P_e = 1 - P_c = 1 - \int_{-\infty}^{\infty}\left(1 - Q(x)\right)^{M-1}\frac{1}{\sqrt{2\pi}}\,\exp\!\left[-\frac{1}{2}\left(x - \sqrt{\frac{2E}{N_0}\cdot\frac{M}{M-1}}\right)^{\!2}\,\right] dx$$
◊ This is a relative gain of 10 log₁₀[M/(M−1)] dB over orthogonal signaling:
◊ M = 2: 3 dB gain; M = 10: 0.46 dB gain.
Chapter 4.5: Optimal Detection in Presence of Uncertainty: Non-Coherent Detection
Chapter 4.5 Optimal Detection in Presence of Uncertainty: Non-Coherent Detection

◊ Previous sections assume that the signals {s_m(t)}, or an orthonormal basis {φ_j(t)}, are available at the receiver.
◊ In many cases this assumption is not valid:
◊ Transmission over the channel introduces a random attenuation or a random phase shift to the signal.
◊ Knowledge of the signals at the receiver is imperfect when the transmitter and receiver are not perfectly synchronized: although the transmitter sends {s_m(t)}, due to asynchronism the receiver can only use {s_m(t − t_d)}, where t_d is the random time slip between the transmitter and receiver clocks.
◊ Consider transmission over an AWGN channel with a random parameter θ:
$$r(t) = s_m(t; \theta) + n(t)$$
◊ By the Karhunen-Loève expansion theorem (2.8-2), we can find an orthonormal basis, giving the vector model
$$\mathbf r = \mathbf s_{m,\theta} + \mathbf n$$
Chapter 4.5 Optimal Detection in Presence of Uncertainty: Non-Coherent Detection

◊ The optimal (MAP) detection rule is (see 4.2-15)
$$\hat m = \arg\max_{1\le m\le M} P_m\, p(\mathbf r\mid m) = \arg\max_{1\le m\le M} P_m\int p(\mathbf r\mid m, \boldsymbol\theta)\, p(\boldsymbol\theta)\, d\boldsymbol\theta = \arg\max_{1\le m\le M} P_m\int p_{\mathbf n}(\mathbf r - \mathbf s_{m,\theta})\, p(\boldsymbol\theta)\, d\boldsymbol\theta$$
◊ This decision rule determines the decision regions.
◊ The minimum error probability is
$$P_e = \frac{1}{M}\sum_{m=1}^{M}\int_{D_m^c}\int p(\mathbf r\mid m, \boldsymbol\theta)\, p(\boldsymbol\theta)\, d\boldsymbol\theta\, d\mathbf r = \frac{1}{M}\sum_{m=1}^{M}\sum_{\substack{1\le m'\le M\\ m'\neq m}}\int_{D_{m'}}\int p_{\mathbf n}(\mathbf r - \mathbf s_{m,\theta})\, p(\boldsymbol\theta)\, d\boldsymbol\theta\, d\mathbf r \qquad (4.5\text{-}3)$$
Chapter 4.5 Optimal Detection in Presence of Uncertainty: Non-Coherent Detection

◊ Example: Consider a binary antipodal signaling system with equiprobable signals s₁(t) = s(t) and s₂(t) = −s(t) in an AWGN channel with noise PSD N0/2.
◊ The channel is modeled as
$$r(t) = A\, s_m(t) + n(t)$$
◊ A > 0: random gain with PDF p(A); p(A) = 0 for A < 0.
◊ p(r | m, A) = p_n(r − A s_m).
◊ The optimal decision region for s₁(t) is
$$D_1 = \left\{r : \int_0^{\infty} e^{-\left(r - A\sqrt{E_b}\right)^2/N_0}\, p(A)\, dA > \int_0^{\infty} e^{-\left(r + A\sqrt{E_b}\right)^2/N_0}\, p(A)\, dA\right\}$$
$$= \left\{r : \int_0^{\infty} e^{-\left(A^2E_b + r^2\right)/N_0}\left(e^{2rA\sqrt{E_b}/N_0} - e^{-2rA\sqrt{E_b}/N_0}\right) p(A)\, dA > 0\right\} = \{r : r > 0\}$$
◊ since A > 0 implies
$$e^{2rA\sqrt{E_b}/N_0} - e^{-2rA\sqrt{E_b}/N_0} > 0 \iff e^{4rA\sqrt{E_b}/N_0} > 1 \iff r > 0$$
Chapter 4.5 Optimal Detection in Presence of Uncertainty: Non-Coherent Detection

◊ The error probability is
$$P_b = \int_0^{\infty}\left(\int_{-\infty}^{0}\frac{1}{\sqrt{\pi N_0}}\, e^{-\left(r - A\sqrt{E_b}\right)^2/N_0}\, dr\right) p(A)\, dA = \int_0^{\infty} P\!\left[\mathcal N\!\left(A\sqrt{E_b},\ \tfrac{N_0}{2}\right) < 0\right] p(A)\, dA$$
$$= \int_0^{\infty} P\!\left[\mathcal N(0,1) > \frac{A\sqrt{E_b}}{\sqrt{N_0/2}}\right] p(A)\, dA = \int_0^{\infty} Q\!\left(A\sqrt{2E_b/N_0}\right) p(A)\, dA = \mathbb E\!\left[Q\!\left(A\sqrt{2E_b/N_0}\right)\right]$$
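The final expectation E[Q(A√(2Eb/N0))] is easy to approximate numerically once a gain distribution is assumed. The uniform PDF on [0.5, 1.5] below is purely an illustrative assumption, not part of the text:

```python
from math import erfc, sqrt

def qfunc(x):
    return 0.5 * erfc(x / sqrt(2.0))

def avg_error_prob(ebn0, a_lo=0.5, a_hi=1.5, steps=10000):
    """E[Q(A*sqrt(2*Eb/N0))] for a gain A uniform on [a_lo, a_hi]
    (the uniform fading PDF is an illustrative assumption)."""
    h = (a_hi - a_lo) / steps
    total = 0.0
    for i in range(steps):
        a = a_lo + (i + 0.5) * h          # midpoint rule
        total += qfunc(a * sqrt(2 * ebn0))
    return total / steps
```

Because Q(x) is convex for x > 0, Jensen's inequality gives E[Q(Ac)] ≥ Q(c·E[A]): a random gain with the same mean degrades the average error probability relative to a fixed gain.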
Chapter 4.5-1 Noncoherent Detection of Carrier Modulated Signals

◊ For carrier-modulated signals, {s_m(t)} are bandpass with lowpass equivalents s_ml(t):
$$s_m(t) = \mathrm{Re}\!\left[s_{ml}(t)\, e^{j2\pi f_c t}\right]$$
◊ In an AWGN channel,
$$r(t) = s_m(t - t_d) + n(t)$$
◊ t_d : random time asynchronism between transmitter and receiver.
$$r(t) = \mathrm{Re}\!\left[s_{ml}(t - t_d)\, e^{j2\pi f_c (t - t_d)}\right] + n(t) = \mathrm{Re}\!\left[s_{ml}(t - t_d)\, e^{-j2\pi f_c t_d}\, e^{j2\pi f_c t}\right] + n(t)$$
◊ The lowpass equivalent of s_m(t − t_d) is s_ml(t − t_d) e^{−j2πf_c t_d}.
◊ In practice t_d << T_S, so s_ml(t − t_d) ≈ s_ml(t); but the random phase shift φ = −2πf_c t_d can be large, since f_c is large. ⇒ Noncoherent detection.
Chapter 4.5-1 Noncoherent Detection of Carrier Modulated Signals

◊ In the noncoherent case,
$$r(t) = \mathrm{Re}\!\left[\left(e^{j\phi}\, s_{ml}(t) + n_l(t)\right) e^{j2\pi f_c t}\right]$$
◊ The baseband channel model:
$$r_l(t) = e^{j\phi}\, s_{ml}(t) + n_l(t)$$
◊ Vector equivalent form:
$$\mathbf r_l = e^{j\phi}\, \mathbf s_{ml} + \mathbf n_l$$
◊ The optimum noncoherent detector (with φ uniform on [0, 2π)) is
$$\hat m = \arg\max_{1\le m\le M} \frac{P_m}{2\pi}\int_0^{2\pi} p_{\mathbf n}\!\left(\mathbf r_l - e^{j\phi}\mathbf s_{ml}\right) d\phi$$
◊ From (2.9-13), n_l ~ CN(0, 2N0 I_N); so using (4.5-3),
$$\hat m = \arg\max_{1\le m\le M} \frac{P_m}{2\pi}\int_0^{2\pi} \frac{1}{(4\pi N_0)^N}\, e^{-\left\|\mathbf r_l - e^{j\phi}\mathbf s_{ml}\right\|^2/(4N_0)}\, d\phi$$
Chapter 4.5-1 Noncoherent Detection of Carrier Modulated Signals

◊ Expanding the exponent, with ||s_ml||² = 2E_m,
$$\left\|\mathbf r_l - e^{j\phi}\mathbf s_{ml}\right\|^2 = \|\mathbf r_l\|^2 + \|\mathbf s_{ml}\|^2 - 2\,\mathrm{Re}\!\left[e^{-j\phi}\,\mathbf r_l\cdot\mathbf s_{ml}^*\right] = \|\mathbf r_l\|^2 + 2E_m - 2\,\mathrm{Re}\!\left[e^{-j\phi}\,\mathbf r_l\cdot\mathbf s_{ml}^*\right]$$
◊ Dropping factors independent of m and φ,
$$\hat m = \arg\max_{1\le m\le M} P_m\, e^{-E_m/(2N_0)}\, \frac{1}{2\pi}\int_0^{2\pi} \exp\!\left(\frac{\mathrm{Re}\!\left[e^{-j\phi}\,\mathbf r_l\cdot\mathbf s_{ml}^*\right]}{2N_0}\right) d\phi$$
◊ Writing r_l·s*_ml = |r_l·s*_ml| e^{jθ}, where θ is the phase of r_l·s*_ml, we have Re[e^{−jφ} r_l·s*_ml] = |r_l·s*_ml| cos(φ − θ), so
$$\hat m = \arg\max_{1\le m\le M} P_m\, e^{-E_m/(2N_0)}\, \frac{1}{2\pi}\int_0^{2\pi} \exp\!\left(\frac{\left|\mathbf r_l\cdot\mathbf s_{ml}^*\right|}{2N_0}\cos(\phi-\theta)\right) d\phi$$
◊ Using the identity (I₀(x) is the modified Bessel function of the first kind and order zero)
$$I_0(x) = \frac{1}{2\pi}\int_0^{2\pi} e^{x\cos\phi}\, d\phi$$
◊ the optimum noncoherent detection rule becomes
$$\hat m = \arg\max_{1\le m\le M} P_m\, e^{-E_m/(2N_0)}\, I_0\!\left(\frac{\left|\mathbf r_l\cdot\mathbf s_{ml}^*\right|}{2N_0}\right)$$
Chapter 4.5-1 Noncoherent Detection of Carrier Modulated Signals

◊ If the signals are equiprobable and have equal energy, then, since I₀(x) is increasing,
$$\hat m = \arg\max_{1\le m\le M} I_0\!\left(\frac{\left|\mathbf r_l\cdot\mathbf s_{ml}^*\right|}{2N_0}\right) = \arg\max_{1\le m\le M} \left|\mathbf r_l\cdot\mathbf s_{ml}^*\right| = \arg\max_{1\le m\le M} \left|\int_{-\infty}^{\infty} r_l(t)\, s_{ml}^*(t)\, dt\right|$$
◊ This is an envelope detector.
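The envelope-detector rule can be sketched in discrete time. The two-sample "signals" below are illustrative stand-ins for sampled lowpass equivalents; the point is only that the decision is invariant to the unknown carrier phase:

```python
import cmath
import random

def envelope_detect(r, signals):
    """Noncoherent detector: pick the index with the largest |r . s*|
    (equiprobable, equal-energy signals assumed)."""
    def corr_mag(s):
        return abs(sum(ri * si.conjugate() for ri, si in zip(r, s)))
    return max(range(len(signals)), key=lambda m: corr_mag(signals[m]))

# Two orthogonal discrete-time "lowpass signals" (illustrative)
s0 = [1 + 0j, 0j]
s1 = [0j, 1 + 0j]

# Received: s0 rotated by an unknown carrier phase, noise-free for clarity
phi = random.uniform(0, 2 * cmath.pi)
r = [x * cmath.exp(1j * phi) for x in s0]
```

Because only |r · s*| enters the decision, the random rotation e^{jφ} cancels and the correct symbol is recovered regardless of φ.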
4.6 Comparison of Digital Modulation Methods

◊ One can compare digital modulation methods on the basis of the SNR required to achieve a specified probability of error.
◊ However, such a comparison is not very meaningful unless it is made under some constraint, such as a fixed data rate of transmission or a fixed bandwidth.
◊ For multiphase signals, the channel bandwidth required is simply the bandwidth of the equivalent lowpass signal pulse g(t) of duration T, which is approximately equal to the reciprocal of T.
◊ Since T = k/R = (log₂M)/R, it follows that
$$W = \frac{R}{\log_2 M}$$
◊ As M is increased, the channel bandwidth required, when the bit rate R is fixed, decreases. The bandwidth efficiency is measured by the bit-rate-to-bandwidth ratio, which is

    R/W = log₂M
◊ The bandwidth-efficient method for transmitting PAM is single-sideband. The channel bandwidth required to transmit the signal is approximately equal to 1/2T, and hence

    R/W = 2 log₂M

  which is a factor of 2 better than PSK.
◊ For QAM, we have two orthogonal carriers, with each carrier having a PAM signal.
◊ Thus, we double the rate relative to PAM. However, the QAM signal must be transmitted via double-sideband. Consequently, QAM and PAM have the same bandwidth efficiency when the bandwidth is referenced to the band-pass signal.
◊ As for orthogonal signals, if the M = 2^k orthogonal signals are constructed by means of orthogonal carriers with minimum frequency separation of 1/2T, the bandwidth required for transmission of k = log₂M information bits is

    W = M/(2T) = MR/(2k) = [M/(2 log₂M)] R

◊ In this case, the bandwidth increases as M increases.
◊ In the case of biorthogonal signals, the required bandwidth is one-half of that for orthogonal signals.
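The bandwidth growth of orthogonal signaling follows directly from W = MR/(2 log₂M). A minimal sketch (the bit rate value is an arbitrary illustrative assumption):

```python
import math

def w_orthogonal(M, R):
    """Bandwidth for M orthogonal carriers at minimum spacing 1/2T:
    W = M * R / (2 * log2(M))."""
    return M * R / (2 * math.log2(M))

def w_biorthogonal(M, R):
    """Biorthogonal signaling requires half the bandwidth of orthogonal."""
    return w_orthogonal(M, R) / 2

R = 1000.0  # bits/s, illustrative rate
for M in (2, 4, 16, 64):
    print(M, w_orthogonal(M, R))
# W grows with M: e.g. M = 64 needs 64*1000/12 ~ 5333 Hz, versus 1000 Hz for M = 2
```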
◊ A compact and meaningful comparison of modulation methods is one based on the normalized data rate R/W (bits per second per hertz of bandwidth) versus the SNR per bit (ε_b/N₀) required to achieve a given error probability.
◊ In the case of PAM, QAM, and PSK, increasing M results in a higher bit-rate-to-bandwidth ratio R/W.
◊ However, the cost of achieving the higher data rate is an increase in the SNR per bit.
◊ Consequently, these modulation methods are appropriate for communication channels that are bandwidth limited, where we desire R/W > 1 and where there is sufficiently high SNR to support increases in M.
◊ Telephone channels and digital microwave radio channels are examples of such band-limited channels.
◊ In contrast, M-ary orthogonal signals yield R/W ≤ 1. As M increases, R/W decreases due to an increase in the required channel bandwidth.
◊ The SNR per bit required to achieve a given error probability decreases as M increases.
◊ Consequently, M-ary orthogonal signals are appropriate for power-limited channels that have sufficiently large bandwidth to accommodate a large number of signals.
◊ As M→∞, the error probability can be made as small as desired, provided that SNR > 0.693 (−1.6 dB). This is the minimum SNR per bit required to achieve reliable transmission in the limit as the channel bandwidth W→∞ and the corresponding R/W→0.
◊ The figure above also shows the normalized capacity of the band-limited AWGN channel, which is due to Shannon (1948).
◊ The ratio C/W, where C (= R) is the capacity in bits/s, represents the highest achievable bit-rate-to-bandwidth ratio on this channel.
◊ Hence, it serves as the upper bound on the bandwidth efficiency of any type of modulation.
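The −1.6 dB figure can be recovered numerically from the Shannon capacity of the band-limited AWGN channel: with spectral efficiency r = C/W, capacity requires r = log₂(1 + r·ε_b/N₀), so the minimum SNR per bit is ε_b/N₀ = (2^r − 1)/r, which tends to ln 2 ≈ 0.693 as r→0. A stdlib-only sketch:

```python
import math

def ebn0_min(rw):
    """Minimum eb/N0 (linear scale) to operate at spectral efficiency rw = C/W
    on the band-limited AWGN channel: (2**rw - 1) / rw."""
    return (2 ** rw - 1) / rw

def to_db(x):
    """Convert a linear power ratio to decibels."""
    return 10 * math.log10(x)

# As C/W -> 0, the required eb/N0 approaches ln 2 = 0.693 (about -1.6 dB):
for rw in (1.0, 0.1, 0.001):
    print(rw, ebn0_min(rw), to_db(ebn0_min(rw)))
print(round(math.log(2), 3), round(to_db(math.log(2)), 2))  # -> 0.693 -1.59
```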