2011 IEEE International Conference on Signal and Image Processing Applications (ICSIPA 2011)
978-1-4577-0242-6/11/$26.00 ©2011 IEEE
The 𝜖−Normalized Sign Regressor Least Mean Square (NSRLMS) Adaptive Algorithm
Mohammed Mujahid Ulla Faiz and Azzedine Zerguine
Electrical Engineering Department, King Fahd University of Petroleum and Minerals
Dhahran 31261, Saudi Arabia. {mujahid, azzedine}@kfupm.edu.sa
Abstract—In this paper, expressions are derived for the steady-state and tracking excess mean-square error (EMSE) of the 𝜖−normalized sign regressor least mean square (NSRLMS) adaptive algorithm. Finally, it is shown that simulations performed for both the cases of white and correlated Gaussian regressors substantiate the developed theory very well.
I. INTRODUCTION
The sign-based variants of the least mean square (LMS) algorithm [1] were introduced due to the simplicity of their implementation. The sign regressor algorithm (SRA) is one such variant of the LMS algorithm, based on clipping of the input data [2]. However, these sign-based algorithms result in a performance loss when compared with the LMS algorithm [3].
In [4], it is shown that the normalized least mean square (NLMS) algorithm converges faster than the LMS algorithm. A sign version of the NLMS algorithm, the normalized sign regressor least mean square (NSRLMS) algorithm, or simply the normalized sign regressor algorithm (NSRA) as it is more commonly known, combines the advantages of the NLMS and SRA algorithms. Theoretical studies of the NSRLMS algorithm can be found in [5]–[6]. In [7], the NSRLMS algorithm was tested in an adaptive noise cancellation scenario in order to remove noise from the electrocardiogram (ECG) signal. In our work, expressions are evaluated for the steady-state excess mean-square error (EMSE) of the 𝜖−NSRLMS algorithm in a stationary environment. Also, expressions for the tracking EMSE in a nonstationary environment are presented. The framework used in our analysis relies on energy conservation arguments [8]. The simulation results show that theory and simulation are in very good agreement.
The organization of the paper is as follows. In Section II, the 𝜖−NSRLMS algorithm is described. The mean-square analysis and the tracking analysis of the 𝜖−NSRLMS algorithm are presented in Sections III and IV, respectively. Finally, simulation results are discussed in Section V, and Section VI concludes the paper.
II. THE 𝜖−NSRLMS ALGORITHM
Consider a zero-mean random variable $d$ with realizations $\{d(0), d(1), \ldots\}$, and a zero-mean random row vector $\mathbf{u}$ with realizations $\{\mathbf{u}_0, \mathbf{u}_1, \ldots\}$. The optimal weight vector $\mathbf{w}^o$ that solves

$$\min_{\mathbf{w}} \, \mathrm{E}\,|d - \mathbf{u}\mathbf{w}|^2, \qquad (1)$$
can be approximated iteratively via the recursion (the 𝜖−NSRLMS algorithm)

$$\mathbf{w}_i = \mathbf{w}_{i-1} + \frac{\mu}{\epsilon + \|\mathbf{u}_i\|_{\mathrm{H}}^2}\,\mathrm{csgn}[\mathbf{u}_i]^{*}\, e_i, \qquad i \ge 0, \qquad (2)$$
where $\mathbf{w}_i$ (a column vector) is the updated weight vector at time $i$, $\mu$ is the step-size, $\epsilon$ is a small positive constant that avoids division by zero when the regressor is zero, $\mathrm{H}[\cdot]$ is some positive-definite Hermitian matrix-valued function of $\mathbf{u}_i$, and $e_i$ denotes the estimation error signal given by

$$e_i = d_i - \mathbf{u}_i\mathbf{w}_{i-1}. \qquad (3)$$
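As a concrete illustration, the recursion (2)–(3) can be sketched in a few lines of Python. The normalization $\|\mathbf{u}_i\|_{\mathrm{H}}^2$ is instantiated here as $\mathrm{Re}[\mathbf{u}_i\,\mathrm{csgn}(\mathbf{u}_i)^*]$ (the quantity appearing later in (15)); the filter length, step-size, seed, and noise level are illustrative assumptions, not the authors' settings.

```python
import numpy as np

def csgn(u):
    """Complex sign: sign of the real part + j * sign of the imaginary part."""
    return np.sign(u.real) + 1j * np.sign(u.imag)

def nsrlms_step(w, u, d, mu=0.05, eps=1e-6):
    """One epsilon-NSRLMS update, eqs. (2)-(3).
    ||u||_H^2 is instantiated as Re[u csgn(u)^*] (an illustrative choice)."""
    e = d - u @ w                              # estimation error, eq. (3)
    norm = np.real(u @ np.conj(csgn(u)))       # = sum_k (|Re u_k| + |Im u_k|)
    w_new = w + (mu / (eps + norm)) * np.conj(csgn(u)) * e
    return w_new, e

# Toy system-identification run with white circular Gaussian regressors
rng = np.random.default_rng(0)
M = 10
w_o = (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)
w = np.zeros(M, dtype=complex)
sigma_v = np.sqrt(1e-3)                        # ~30 dB SNR for unit-variance regressors
errs = []
for i in range(5000):
    u = (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)
    v = sigma_v * (rng.standard_normal() + 1j * rng.standard_normal()) / np.sqrt(2)
    d = u @ w_o + v                            # stationary data model (assumption A.1)
    w, e = nsrlms_step(w, u, d)
    errs.append(abs(e) ** 2)
print(np.mean(errs[:100]), np.mean(errs[-100:]))  # early vs. late squared error
```

After convergence, the late-window mean-squared error settles near the noise floor $\sigma_v^2$, as predicted by (21) when the EMSE is small.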
III. MEAN-SQUARE ANALYSIS OF THE 𝜖−NSRLMS ALGORITHM
We shall assume that the data $\{d_i, \mathbf{u}_i\}$ satisfy the following conditions of the stationary data model [8]:

A.1 There exists an optimal weight vector $\mathbf{w}^o$ such that $d_i = \mathbf{u}_i\mathbf{w}^o + v_i$.
A.2 The noise sequence $v_i$ is independent and identically distributed (i.i.d.) circular with variance $\sigma_v^2 = \mathrm{E}[|v_i|^2]$ and is independent of $\mathbf{u}_j$ for all $i, j$.
A.3 The initial condition $\mathbf{w}_{-1}$ is independent of the zero-mean random variables $\{d_i, \mathbf{u}_i, v_i\}$.
A.4 The regressor covariance matrix is $\mathbf{R} = \mathrm{E}[\mathbf{u}_i^{*}\mathbf{u}_i] > 0$.
For the adaptive filter of the form in (2), and for any data $\{d_i, \mathbf{u}_i\}$, assuming filter operation in steady-state, the following variance relation holds [8]:

$$\mu\,\mathrm{E}\big[\|\mathbf{u}_i\|_{\mathrm{H}}^2\,|g[e_i]|^2\big] = 2\,\mathrm{Re}\big[\mathrm{E}[e_{a_i}^{*}\, g[e_i]]\big], \quad \text{as } i \to \infty, \qquad (4)$$
where

$$\mathrm{E}\big[\|\mathbf{u}_i\|_{\mathrm{H}}^2\big] = \mathrm{E}\big[\mathrm{Re}[\mathbf{u}_i\,\mathrm{H}[\mathbf{u}_i]\,\mathbf{u}_i^{*}]\big], \qquad (5)$$

$$e_i = e_{a_i} + v_i, \qquad (6)$$

with $g[\cdot]$ denoting some function of $e_i$, and $e_{a_i} = \mathbf{u}_i(\mathbf{w}^o - \mathbf{w}_{i-1})$ is the a priori estimation error. Then $g[e_i]$ for the 𝜖−NSRLMS algorithm becomes

$$g[e_i] = \frac{e_{a_i} + v_i}{\epsilon + \|\mathbf{u}_i\|_{\mathrm{H}}^2}. \qquad (7)$$
By using the fact that $e_{a_i}$ and $v_i$ are independent, we arrive at the following expression for the term $\mathrm{E}[e_{a_i}^{*} g[e_i]]$:

$$\mathrm{E}\big[e_{a_i}^{*}\, g[e_i]\big] = \mathrm{E}\left[\frac{|e_{a_i}|^2}{\epsilon + \|\mathbf{u}_i\|_{\mathrm{H}}^2}\right]. \qquad (8)$$
To evaluate the term $\mathrm{E}[\|\mathbf{u}_i\|_{\mathrm{H}}^2 |g[e_i]|^2]$, we start by noting that

$$|g[e_i]|^2 = \frac{1}{\big(\epsilon + \|\mathbf{u}_i\|_{\mathrm{H}}^2\big)^2}\Big[|e_{a_i}|^2 + |v_i|^2 + e_{a_i}^{*} v_i + e_{a_i} v_i^{*}\Big]. \qquad (9)$$
If we multiply $|g[e_i]|^2$ by $\|\mathbf{u}_i\|_{\mathrm{H}}^2$ from the left, and use the fact that $v_i$ is independent of both $\mathbf{u}_i$ and $e_{a_i}$, we obtain

$$\mathrm{E}\big[\|\mathbf{u}_i\|_{\mathrm{H}}^2 |g[e_i]|^2\big] = \mathrm{E}\left[\frac{\|\mathbf{u}_i\|_{\mathrm{H}}^2\,|e_{a_i}|^2}{\big(\epsilon + \|\mathbf{u}_i\|_{\mathrm{H}}^2\big)^2}\right] + \sigma_v^2\,\mathrm{E}\left[\frac{\|\mathbf{u}_i\|_{\mathrm{H}}^2}{\big(\epsilon + \|\mathbf{u}_i\|_{\mathrm{H}}^2\big)^2}\right]. \qquad (10)$$
Substituting (8) and (10) into (4) we get

$$2\,\mathrm{Re}\left[\mathrm{E}\left[\frac{|e_{a_i}|^2}{\epsilon + \|\mathbf{u}_i\|_{\mathrm{H}}^2}\right]\right] = \mu\,\mathrm{E}\left[\frac{\|\mathbf{u}_i\|_{\mathrm{H}}^2\,|e_{a_i}|^2}{\big(\epsilon + \|\mathbf{u}_i\|_{\mathrm{H}}^2\big)^2}\right] + \mu\sigma_v^2\,\mathrm{E}\left[\frac{\|\mathbf{u}_i\|_{\mathrm{H}}^2}{\big(\epsilon + \|\mathbf{u}_i\|_{\mathrm{H}}^2\big)^2}\right]. \qquad (11)$$
By using the assumption $\epsilon \approx 0$ in (11), we obtain

$$2\,\mathrm{Re}\left[\mathrm{E}\left[\frac{|e_{a_i}|^2}{\|\mathbf{u}_i\|_{\mathrm{H}}^2}\right]\right] = \mu\,\mathrm{E}\left[\frac{|e_{a_i}|^2}{\|\mathbf{u}_i\|_{\mathrm{H}}^2}\right] + \mu\sigma_v^2\,\mathrm{E}\left[\frac{1}{\|\mathbf{u}_i\|_{\mathrm{H}}^2}\right]. \qquad (12)$$
Now, let us use the following steady-state approximation:

$$\mathrm{E}\left[\frac{|e_{a_i}|^2}{\|\mathbf{u}_i\|_{\mathrm{H}}^2}\right] \approx \frac{\mathrm{E}\big[|e_{a_i}|^2\big]}{\mathrm{E}\big[\|\mathbf{u}_i\|_{\mathrm{H}}^2\big]}. \qquad (13)$$
From Price's theorem [9] we have

$$\mathrm{E}\big[\mathrm{Re}[x^{*}\,\mathrm{csgn}(y)]\big] = \sqrt{\frac{2}{\pi}}\,\frac{\sqrt{2}}{\sigma_y}\,\mathrm{E}\big[\mathrm{Re}[x^{*} y]\big], \qquad (14)$$

where $x$ and $y$ denote two complex-valued jointly-Gaussian random variables. Therefore,

$$\mathrm{E}\big[\|\mathbf{u}_i\|_{\mathrm{H}}^2\big] = \mathrm{E}\big[\mathrm{Re}[\mathbf{u}_i\,\mathrm{H}[\mathbf{u}_i]\,\mathbf{u}_i^{*}]\big] = \mathrm{E}\big[\mathrm{Re}[\mathbf{u}_i\,\mathrm{csgn}[\mathbf{u}_i]^{*}]\big] = \frac{4\,\mathrm{Tr}(\mathbf{R})}{\sqrt{\pi\sigma_u^2}}. \qquad (15)$$
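For real-valued jointly Gaussian scalars, the identity (14) reduces to the Bussgang-type result $\mathrm{E}[x\,\mathrm{sgn}(y)] = \sqrt{2/\pi}\,\mathrm{E}[xy]/\sigma_y$, which is easy to verify by Monte Carlo. The correlation and variance below are illustrative choices, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000
rho, sigma_y = 0.6, 2.0                 # illustrative correlation and std. dev.

# Construct a jointly Gaussian pair (x, y) with E[xy] = rho * sigma_y
y = sigma_y * rng.standard_normal(n)
x = rho * y / sigma_y + np.sqrt(1 - rho**2) * rng.standard_normal(n)

lhs = np.mean(x * np.sign(y))                       # E[x sgn(y)], sampled
rhs = np.sqrt(2 / np.pi) * np.mean(x * y) / sigma_y  # Price/Bussgang prediction
print(lhs, rhs)
```

The two quantities agree to within Monte Carlo error, which is the scalar version of the linearization exploited in (14)–(15).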
Substituting (13) and (15) into (12) we get

$$\frac{\sqrt{\pi\sigma_u^2}\,\mathrm{E}\big[|e_{a_i}|^2\big]}{2\,\mathrm{Tr}(\mathbf{R})} = \frac{\mu\sqrt{\pi\sigma_u^2}\,\mathrm{E}\big[|e_{a_i}|^2\big]}{4\,\mathrm{Tr}(\mathbf{R})} + \mu\sigma_v^2\,\mathrm{E}\left[\frac{1}{\|\mathbf{u}_i\|_{\mathrm{H}}^2}\right]. \qquad (16)$$
Therefore, the steady-state EMSE $\zeta = \mathrm{E}[|e_{a_i}|^2]$ of the 𝜖−NSRLMS algorithm can be shown to be

$$\zeta = \frac{4\mu\sigma_v^2\,\mathrm{Tr}(\mathbf{R})}{(2-\mu)\sqrt{\pi\sigma_u^2}}\,\mathrm{E}\left[\frac{1}{\|\mathbf{u}_i\|_{\mathrm{H}}^2}\right]. \qquad (17)$$
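The expectation $\mathrm{E}[1/\|\mathbf{u}_i\|_{\mathrm{H}}^2]$ in (17) has no simple closed form, but it can be estimated by Monte Carlo, after which (17) is a one-line formula. The sketch below assumes white unit-variance circular Gaussian regressors and instantiates $\|\mathbf{u}_i\|_{\mathrm{H}}^2$ as $\mathrm{Re}[\mathbf{u}_i\,\mathrm{csgn}(\mathbf{u}_i)^*]$ (the quantity in (15)); all constants are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
M, mu, sigma_v2, sigma_u2 = 10, 0.01, 1e-3, 1.0   # illustrative constants

# Monte Carlo estimate of E[1/||u||_H^2], with ||u||_H^2 taken as
# Re[u csgn(u)^*] = sum_k (|Re u_k| + |Im u_k|)
U = (rng.standard_normal((100_000, M)) + 1j * rng.standard_normal((100_000, M))) / np.sqrt(2)
norms = np.abs(U.real).sum(axis=1) + np.abs(U.imag).sum(axis=1)
E_inv = np.mean(1.0 / norms)

TrR = M * sigma_u2                                 # Tr(R) for white regressors
zeta = 4 * mu * sigma_v2 * TrR / ((2 - mu) * np.sqrt(np.pi * sigma_u2)) * E_inv  # eq. (17)
print(zeta)
```

For these values the predicted EMSE is well below the noise floor $\sigma_v^2$, consistent with the small steady-state misadjustment seen in Figures 1-2.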
IV. TRACKING ANALYSIS OF THE 𝜖−NSRLMS ALGORITHM
Here, we assume that the data $\{d_i, \mathbf{u}_i\}$ satisfy the following conditions of the nonstationary data model [8]:

A.5 There exists a vector $\mathbf{w}_i^o$ such that $d_i = \mathbf{u}_i\mathbf{w}_i^o + v_i$.
A.6 The weight vector varies according to the random-walk model $\mathbf{w}_i^o = \mathbf{w}_{i-1}^o + \mathbf{q}_i$, and the sequence $\mathbf{q}_i$ is i.i.d. with covariance matrix $\mathbf{Q}$. Moreover, $\mathbf{q}_i$ is independent of $\{v_j, \mathbf{u}_j\}$ for all $i, j$.
A.7 The initial conditions $\{\mathbf{w}_{-1}, \mathbf{w}_{-1}^o\}$ are independent of the zero-mean random variables $\{d_i, \mathbf{u}_i, v_i, \mathbf{q}_i\}$.

In this case, the following variance relation holds [8]:

$$\mu\,\mathrm{E}\big[\|\mathbf{u}_i\|_{\mathrm{H}}^2 |g[e_i]|^2\big] + \mu^{-1}\mathrm{Tr}(\mathbf{Q}) = 2\,\mathrm{Re}\big[\mathrm{E}[e_{a_i}^{*}\, g[e_i]]\big], \quad \text{as } i \to \infty. \qquad (18)$$
Tracking results can be obtained by inspection from the mean-square results, as there are only minor differences. Therefore, by substituting (8) and (10) into (18) we get

$$\mu\sigma_v^2\,\mathrm{E}\left[\frac{1}{\|\mathbf{u}_i\|_{\mathrm{H}}^2}\right] + \mu^{-1}\mathrm{Tr}(\mathbf{Q}) = \frac{(2-\mu)\sqrt{\pi\sigma_u^2}\,\mathrm{E}\big[|e_{a_i}|^2\big]}{4\,\mathrm{Tr}(\mathbf{R})}. \qquad (19)$$
Therefore, the tracking EMSE $\zeta$ of the 𝜖−NSRLMS algorithm is given by

$$\zeta = \frac{4\,\mathrm{Tr}(\mathbf{R})}{(2-\mu)\sqrt{\pi\sigma_u^2}}\left[\mu\sigma_v^2\,\mathrm{E}\left[\frac{1}{\|\mathbf{u}_i\|_{\mathrm{H}}^2}\right] + \mu^{-1}\mathrm{Tr}(\mathbf{Q})\right]. \qquad (20)$$
Moreover, in both cases, the expression for the mean-square error (MSE) of the 𝜖−NSRLMS algorithm is obtained from

$$\mathrm{E}\big[|e_i|^2\big] = \zeta + \sigma_v^2. \qquad (21)$$
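Note that (20) contains a term proportional to $\mu$ (gradient noise) and a term proportional to $\mu^{-1}$ (lag error), so the tracking EMSE is minimized at an intermediate step-size; for small $\mu$, where $(2-\mu) \approx 2$, the minimizer is approximately $\sqrt{\mathrm{Tr}(\mathbf{Q})/(\sigma_v^2\,\mathrm{E}[1/\|\mathbf{u}_i\|_{\mathrm{H}}^2])}$. A numeric sketch with assumed constants (the value of $\mathrm{E}[1/\|\mathbf{u}_i\|_{\mathrm{H}}^2]$ is an assumed Monte Carlo estimate, not a value from the paper):

```python
import numpy as np

# Illustrative constants for a 10-tap filter with unit-variance regressors
TrR, sigma_u2, sigma_v2 = 10.0, 1.0, 1e-3
E_inv = 0.09                 # assumed estimate of E[1/||u||_H^2]
TrQ = 10 * 1e-9              # M * sigma_q^2 for the random-walk model

def zeta_track(mu):
    """Tracking EMSE, eq. (20)."""
    return 4 * TrR / ((2 - mu) * np.sqrt(np.pi * sigma_u2)) * (
        mu * sigma_v2 * E_inv + TrQ / mu)

# Locate the minimizing step-size on a fine grid
mus = np.linspace(1e-3, 0.5, 10_000)
mu_star_grid = mus[np.argmin(zeta_track(mus))]

# Small-mu closed-form approximation of the minimizer
mu_star_approx = np.sqrt(TrQ / (sigma_v2 * E_inv))
print(mu_star_grid, mu_star_approx)
```

The grid search and the closed-form approximation agree closely, illustrating the familiar gradient-noise versus lag trade-off in the tracking expression.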
V. SIMULATION RESULTS
First, in order to validate the theoretical findings, extensive simulations are carried out for different scenarios. Figures 1-2 correspond to the steady-state MSE of a 10-tap 𝜖−NSRLMS filter in a stationary environment, and Figures 3-4 to the tracking MSE in a nonstationary environment. In all of these figures the MSE is plotted as a function of the step-size 𝜇 for a signal-to-noise ratio (SNR) of 30 dB, and the value of 𝜖 is set to 10⁻⁶. Moreover, all the simulations reported in this work use complex-valued signals.
In the case of Figures 1 and 3, the regressors, with shift structure, are generated by feeding a unit-variance white process into a tapped delay line. However, in Figures 2 and 4, the regressors, with shift structure, are generated by passing correlated data into a tapped delay line. Here, the correlated data are obtained by passing unit-variance i.i.d. Gaussian data through a first-order autoregressive model with transfer function

$$\frac{\sqrt{1-a^2}}{1 - a z^{-1}}, \qquad a = 0.8.$$

As can be seen from Figures 1-2, the simulation results match very well the theoretical result (17), which is the steady-state EMSE of the 𝜖−NSRLMS algorithm.
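The correlated-data generation described above can be sketched as follows; the AR(1) recursion implements the stated transfer function directly, and the sample size and seed are illustrative.

```python
import numpy as np

def ar1_correlated(n, a=0.8, rng=None):
    """Unit-variance correlated data via the first-order AR model
    H(z) = sqrt(1 - a^2) / (1 - a z^{-1}) driven by unit-variance
    i.i.d. Gaussian noise (Section V)."""
    rng = rng or np.random.default_rng(3)
    w = rng.standard_normal(n)
    x = np.empty(n)
    x[0] = np.sqrt(1 - a**2) * w[0]
    for k in range(1, n):
        x[k] = a * x[k - 1] + np.sqrt(1 - a**2) * w[k]
    return x

x = ar1_correlated(200_000)
var = np.var(x)                          # should be ~1 (unit variance)
r1 = np.mean(x[1:] * x[:-1]) / var       # lag-1 autocorrelation, should be ~a
print(var, r1)
```

The $\sqrt{1-a^2}$ gain normalizes the stationary output variance to one, so the regressor power is the same in the white and correlated experiments.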
Finally, to further validate the theoretical results in a tracking scenario, Figures 3-4 depict this behavior. Here, the random-walk channel evolves according to

$$\mathbf{w}_i^o = \mathbf{w}_{i-1}^o + \mathbf{q}_i, \qquad (22)$$
where $\mathbf{q}_i$ is a Gaussian sequence with zero mean and variance $\sigma_q^2 = 10^{-9}$. As observed from Figures 3-4, the simulation results corroborate closely the theoretical result (20), which is the tracking EMSE of the 𝜖−NSRLMS algorithm.
VI. CONCLUSIONS
The mean-square analysis and the tracking analysis of the 𝜖−NSRLMS algorithm are carried out. Moreover, the simulations performed are found to corroborate the analytical results closely.
REFERENCES
[1] B. Widrow and S. D. Stearns, "Adaptive Signal Processing," Prentice-Hall, Englewood Cliffs, NJ, USA, 1985.
[2] E. Eweda, "Analysis and design of a signed regressor LMS algorithm for stationary and nonstationary adaptive filtering with correlated Gaussian data," IEEE Trans. Circuits Syst., vol. 37, no. 11, pp. 1367–1374, Nov. 1990.
[3] T. A. C. M. Claasen and W. F. G. Mecklenbrauker, "Comparison of the convergence of two algorithms for adaptive FIR digital filters," IEEE Trans. Acoust., Speech, Signal Processing, vol. 29, no. 3, pp. 670–678, June 1981.
[4] M. Tarrab and A. Feuer, "Convergence and performance analysis of the normalized LMS algorithm with uncorrelated Gaussian data," IEEE Trans. Inform. Theory, vol. 34, no. 4, pp. 680–691, July 1988.
[5] S. Koike, "Analysis of adaptive filters using normalized signed regressor LMS algorithm," IEEE Trans. Signal Processing, vol. 47, no. 10, pp. 2710–2723, Oct. 1999.
[6] M. H. Costa and J. C. M. Bermudez, "A fully analytical recursive stochastic model to the normalized signed regressor LMS algorithm," in Proc. Seventh Int. Symp. Signal Processing and its Applications, vol. 2, pp. 587–590, July 2003.
[7] M. Z. Ur Rahman, R. A. Shaik, and D. V. R. K. Reddy, "An efficient noise cancellation technique to remove noise from the ECG signal using normalized signed regressor LMS algorithm," in Proc. IEEE Int. Conf. Bioinformatics and Biomedicine, pp. 257–260, Nov. 2009.
[8] A. H. Sayed, "Fundamentals of Adaptive Filtering," Wiley-Interscience, New York, NY, USA, 2003.
[9] R. Price, "A useful theorem for nonlinear devices having Gaussian inputs," IRE Trans. Inform. Theory, vol. 4, no. 2, pp. 69–72, June 1958.
[Figure: MSE (dB) vs. step-size (𝜇); curves: Simulation and Theory.]
Fig. 1. Theoretical and simulated steady-state MSE of the 𝜖−NSRLMS algorithm using white Gaussian regressors.
[Figure: MSE (dB) vs. step-size (𝜇); curves: Simulation and Theory.]
Fig. 2. Theoretical and simulated steady-state MSE of the 𝜖−NSRLMS algorithm using correlated Gaussian regressors.
[Figure: MSE (dB) vs. step-size (𝜇); curves: Simulation and Theory.]
Fig. 3. Theoretical and simulated tracking MSE of the 𝜖−NSRLMS algorithm using white Gaussian regressors.
[Figure: MSE (dB) vs. step-size (𝜇); curves: Simulation and Theory.]
Fig. 4. Theoretical and simulated tracking MSE of the 𝜖−NSRLMS algorithm using correlated Gaussian regressors.