    Iterative and recursive least squares estimation algorithms

    for moving average systems

    Yuanbiao Hu

    School of Engineering and Technology, China University of Geosciences, Beijing 100083, PR China

Article info

    Article history:

    Received 25 September 2008

    Received in revised form 17 December 2012

    Accepted 30 December 2012

    Available online 24 February 2013

    Keywords:

    Iterative identification

    Recursive identification

    Parameter estimation

    Stochastic gradient

    Least squares

Abstract

An iterative least squares algorithm and a recursive least squares algorithm are developed for estimating the parameters of moving average systems. The key is to use the least squares principle and to replace the unmeasurable noise terms in the information vector with their estimates. The steps and flowcharts of computing the parameter estimates are given. The simulation results validate that the proposed algorithms work well.

© 2013 Elsevier B.V. All rights reserved.

    1. Problem formulation

The least squares methods are effective in modeling physical systems, including synchronous generator modeling and parameter estimation [1–5]. In general, a system can be modeled by an autoregressive (AR) model, a moving average (MA) model [6], an autoregressive moving average (ARMA) model [7,8], an impulse response model [9], a Hammerstein nonlinear model [10,11], or a Wiener nonlinear model [12]. Monte-Carlo simulation tests have been used to validate automatic new topic identification of search engine transaction logs [13]. Recently, Ding and Chen proposed a multi-innovation stochastic gradient identification method for linear regression models [14,15], and some related work can be found in [16,17].

The time series models include three basic models: the autoregressive model, the moving average model and the autoregressive moving average model. Their extensions are the controlled autoregressive models, the (multiple-input) output error models [18,19], and the multivariable ARX-like models [20]. These models play an important role in signal processing [21] and system identification [22,23]. Many parameter identification, adaptive filtering and prediction methods have been reported for the (controlled) autoregressive models and the (controlled) autoregressive moving average models [24–26]. Just as Ding et al. pointed out in [27], some contributions assume that the moving average processes and the autoregressive moving average processes under consideration are stationary and ergodic, and the correlation analysis based methods are not suitable for identifying the non-stationary moving average systems and the autoregressive moving average systems, e.g., [6,8,26]. On this point, Ding, Shi and Chen analyzed the convergence properties of the least squares algorithm and the stochastic gradient algorithm for autoregressive moving average processes [28]. Recently, Wang proposed a least squares based recursive algorithm and a least squares based iterative algorithm for output error moving average systems using data filtering [29]. This paper studies the identification problems of the non-stationary and non-ergodic moving average systems.

http://dx.doi.org/10.1016/j.simpat.2012.12.009

Tel.: +86 10 82321887.

E-mail address: [email protected]

Simulation Modelling Practice and Theory 34 (2013) 12–19


This paper is organized as follows. Section 2 derives the identification model of moving average systems. Section 3 presents a recursive least squares algorithm for moving average models. Section 4 derives a least squares based iterative algorithm for identifying moving average models. Section 5 provides two examples to show the effectiveness of the proposed algorithms. Finally, we give some conclusions in Section 6.

    2. The identification model

Consider the following moving average process:

y(t) = C(z) w(t),    (1)

where y(t) is the system observation data, w(t) is a stochastic white noise with zero mean, and C(z) is a polynomial in the shift operator z^{-1} [z^{-1} w(t) = w(t-1)] with

C(z) = 1 + c_1 z^{-1} + c_2 z^{-2} + ... + c_n z^{-n}.

The coefficients c_i are the parameters to be estimated from the observation data {y(t): t = 1, 2, 3, ...}.

It is well known that some identification approaches can estimate the parameters of moving average processes, e.g., the correlation-analysis based algorithms in [8,26] and the recursive least squares algorithms in [28], assuming that w(t) is independent, stationary and ergodic, and satisfies E[w(t)] = 0, E[w(t)w(t+j)] = 0 for j ≠ 0, and E[w^2(t)] = σ^2 (constant), where the symbol E represents the expectation operator.

It has been pointed out in [28] that if the variance of the noise w(t) is time-varying, then Eq. (1) is a non-stationary and non-ergodic moving average process, and the correlation-analysis approaches are not suitable for identifying such non-stationary autoregressive moving average processes. This motivates us to study new identification algorithms for non-stationary moving average processes. This paper proposes an iterative parameter estimation algorithm and a recursive parameter estimation algorithm to identify the parameters c_i of the non-stationary and non-ergodic moving average processes from the available observation data {y(t)}.

Let the superscript T denote the vector/matrix transpose. Define the parameter vector ϑ and the information vector h(t) as

ϑ := [c_1, c_2, ..., c_n]^T ∈ R^n,    h(t) := [w(t-1), w(t-2), ..., w(t-n)]^T ∈ R^n.

Then Eq. (1) can be written as

y(t) = [1 + c_1 z^{-1} + c_2 z^{-2} + ... + c_n z^{-n}] w(t)
     = w(t) + c_1 z^{-1} w(t) + c_2 z^{-2} w(t) + ... + c_n z^{-n} w(t)
     = w(t) + c_1 w(t-1) + c_2 w(t-2) + ... + c_n w(t-n)
     = [w(t-1), w(t-2), ..., w(t-n)] [c_1, c_2, ..., c_n]^T + w(t)
     = h^T(t) ϑ + w(t).    (2)

    This is the identification model of the moving average process in (1).
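The mapping from the noise sequence to the observations in (2) can be sketched numerically. The following Python snippet is our own illustration (not code from the paper); the function name `simulate_ma` and the NumPy implementation are assumptions, and pre-sample noise values are drawn along with the rest of the white noise record:

```python
import numpy as np

def simulate_ma(c, L, sigma=1.0, seed=0):
    """Generate {y(t)} from y(t) = h(t)^T * theta + w(t), Eq. (2),
    with h(t) = [w(t-1), ..., w(t-n)]^T and zero-mean white noise w."""
    c = np.asarray(c, dtype=float)
    n = c.size
    rng = np.random.default_rng(seed)
    w = rng.normal(0.0, sigma, size=L + n)   # noise samples w(1-n), ..., w(L)
    y = np.empty(L)
    for t in range(1, L + 1):
        h = w[t - 1 : t + n - 1][::-1]       # h(t) = [w(t-1), ..., w(t-n)]^T
        y[t - 1] = h @ c + w[t + n - 1]      # y(t) = h(t)^T * theta + w(t)
    return y

# usage sketch: an MA(3) record with c = [0.8, 0.6, 0.3]
y = simulate_ma([0.8, 0.6, 0.3], L=20000)
```

For these coefficients the theoretical output variance is σ^2 (1 + c_1^2 + c_2^2 + c_3^2) = 2.09, which the sample variance of a long record should approach.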

Define the stacked output vector Y(t) and the stacked noise vector W(t) as

Y(t) := [y(t), y(t-1), ..., y(t-p+1)]^T ∈ R^p,    W(t) := [w(t), w(t-1), ..., w(t-p+1)]^T ∈ R^p,    t ≥ p,

and the stacked information matrix

H(t) := [W(t-1), W(t-2), ..., W(t-n)] = [h^T(t); h^T(t-1); ...; h^T(t-p+1)] ∈ R^{p×n},

whose rows are h^T(t), h^T(t-1), ..., h^T(t-p+1). Thus, from (2), we have

Y(t) = H(t) ϑ + W(t).    (3)


    3. The recursive least squares algorithm

Let ϑ̂(t) denote the estimate of ϑ at time t. For the identification model in (2), defining and minimizing the criterion function

J_1(ϑ) := (1/2) Σ_{j=1}^{t} [y(j) - h^T(j) ϑ]^2,

we can obtain the following recursive least squares (RLS) algorithm for estimating ϑ [30,31]:

ϑ̂(t) = ϑ̂(t-1) + P(t) ĥ(t) [y(t) - ĥ^T(t) ϑ̂(t-1)],    (4)
P(t) = P(t-1) - P(t-1) ĥ(t) ĥ^T(t) P(t-1) / [1 + ĥ^T(t) P(t-1) ĥ(t)],    P(0) = p_0 I,    (5)
ŵ(t) = y(t) - ĥ^T(t) ϑ̂(t),    (6)
ĥ(t) = [ŵ(t-1), ŵ(t-2), ..., ŵ(t-n)]^T,    (7)

where I represents an identity matrix of appropriate size, p_0 is a large positive number, e.g., p_0 = 10^6, and ϑ̂(0) = 1_n / p_0 with 1_n being an n-dimensional column vector whose elements are all 1.
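As an illustration only (not the paper's code), the recursion (4)–(7) can be sketched in Python. The function name `rls_ma` and the NumPy details are our own assumptions; the unmeasurable noise terms in the information vector are replaced by the residuals ŵ(t), as (6) and (7) prescribe:

```python
import numpy as np

def rls_ma(y, n, p0=1e6):
    """Recursive least squares for an MA(n) model, following Eqs. (4)-(7)."""
    theta = np.ones(n) / p0                  # theta_hat(0) = 1_n / p0
    P = p0 * np.eye(n)                       # P(0) = p0 * I
    w_hat = np.ones(n) / p0                  # past residuals [w(t-1), ..., w(t-n)]
    for yt in y:
        h = w_hat.copy()                     # Eq. (7): h_hat(t) from residuals
        Ph = P @ h
        denom = 1.0 + h @ Ph                 # scalar 1 + h^T P(t-1) h
        theta = theta + Ph * (yt - h @ theta) / denom   # Eq. (4), using P(t)h = Ph/denom
        P = P - np.outer(Ph, Ph) / denom                # Eq. (5)
        w_new = yt - h @ theta                          # Eq. (6)
        w_hat = np.concatenate(([w_new], w_hat[:-1]))   # shift the residual window
    return theta

# usage sketch: recover c = [0.8, 0.6, 0.3] from simulated data
rng = np.random.default_rng(1)
w = rng.normal(size=8003)
y = w[3:] + 0.8 * w[2:-1] + 0.6 * w[1:-2] + 0.3 * w[:-3]
c_true = np.array([0.8, 0.6, 0.3])
theta_hat = rls_ma(y, n=3)
```

Note that P(t)ĥ(t) in (4) equals P(t-1)ĥ(t) / [1 + ĥ^T(t)P(t-1)ĥ(t)], which the sketch exploits to avoid a second matrix–vector product.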

Let ||X||^2 := tr[X X^T] denote the norm of the matrix X. For the identification model in (3), defining and minimizing the criterion function

J_2(ϑ) := (1/2) Σ_{j=1}^{t} ||Y(j) - H(j) ϑ||^2,

we can obtain a new recursive least squares (RLS) algorithm for estimating ϑ [27,32]:

ϑ̂(t) = ϑ̂(t-1) + P(t) Ĥ^T(t) [Y(t) - Ĥ(t) ϑ̂(t-1)],    (8)

Fig. 1. The flowchart of computing the parameter estimate ϑ̂(t).


P(t) = P(t-1) - P(t-1) Ĥ^T(t) [I + Ĥ(t) P(t-1) Ĥ^T(t)]^{-1} Ĥ(t) P(t-1),    P(0) = p_0 I,    (9)
Ŵ(t-i) = Y(t-i) - Ĥ(t-i) ϑ̂(t),    i = 0, 1, 2, ..., n-1,    (10)
Ĥ(t) = [Ŵ(t-1), Ŵ(t-2), ..., Ŵ(t-n)].    (11)

The following lists the steps of computing ϑ̂(L) in the RLS algorithm with the data length L [31].

1. Collect the observation data {y(t): t = 1, 2, ..., L} (L is the data length).
2. To initialize, let t = 1, ϑ̂(0) = 1_n / p_0, P(0) = p_0 I and Ŵ(t-i) = 1_p / p_0 (i = 1, 2, ..., n) with p_0 = 10^6.
3. Form Ĥ(t) by (11) and compute P(t) by (9).
4. Update the parameter estimation vector ϑ̂(t) by (8).
5. Compute Ŵ(t-i) (i = 0, 1, ..., n-1) by (10).
6. If t < L, increase t by 1 and go to step 3; otherwise, obtain the parameter estimation vector ϑ̂(L).

The flowchart of computing the parameter estimate ϑ̂(t) is shown in Fig. 1 [31].

    4. The least squares based iterative algorithm

Introduce a quadratic criterion function

J_3(ϑ) := (1/2) Σ_{i=0}^{p-1} [y(t-i) - h^T(t-i) ϑ]^2 = (1/2) ||Y(t) - H(t) ϑ||^2,    p ≥ n.

Minimizing J_3(ϑ) by letting the partial derivative of J_3(ϑ) with respect to ϑ be zero gives

∂J_3(ϑ)/∂ϑ = -Σ_{i=0}^{p-1} h(t-i) [y(t-i) - h^T(t-i) ϑ] = -H^T(t) [Y(t) - H(t) ϑ] = 0.

Provided that h(t) is persistently exciting, we can obtain the least-squares estimate:

ϑ̂(t) = [H^T(t) H(t)]^{-1} H^T(t) Y(t).    (12)

Because W(t-i) in H(t) is unmeasurable, we cannot obtain the estimate ϑ̂(t) through (12). As in [33,34], we use the hierarchical identification principle: let k = 1, 2, 3, ... be an iteration variable and ϑ̂_k(t) be the estimate of ϑ at iteration k with the data length t; the unknown variables W(t-i) are replaced with their estimates Ŵ_{k-1}(t-i). The details are as follows.

Define the matrix

Ĥ_k(t) := [Ŵ_{k-1}(t-1), Ŵ_{k-1}(t-2), ..., Ŵ_{k-1}(t-n)] ∈ R^{p×n}.    (13)

From (3), we have W(t-i) = Y(t-i) - H(t-i) ϑ. Replacing the unknown H(t-i) and ϑ with Ĥ_k(t-i) and ϑ̂_k(t), Ŵ_k(t-i) can be computed by

Ŵ_k(t-i) = Y(t-i) - Ĥ_k(t-i) ϑ̂_k(t),    i = 0, 1, ..., n-1.    (14)

Replacing H(t) in (12) with Ĥ_k(t) in (13) results in the following iterative algorithm for computing the estimate ϑ̂_k(t) of ϑ [35–37]:

ϑ̂_k(t) = [Ĥ_k^T(t) Ĥ_k(t)]^{-1} Ĥ_k^T(t) Y(t),    k = 1, 2, 3, ...,    (15)
Ĥ_k(t) = [Ŵ_{k-1}(t-1), Ŵ_{k-1}(t-2), ..., Ŵ_{k-1}(t-n)],    (16)
Y(t) = [y(t), y(t-1), ..., y(t-p+1)]^T,    (17)
Ŵ_k(t-i) = Y(t-i) - Ĥ_k(t-i) ϑ̂_k(t),    i = 0, 1, ..., n-1.    (18)

Eqs. (15)–(18) are referred to as the least-squares based iterative identification algorithm for moving average systems, the LSI algorithm for short.

For finite observation data {y(t): t = 1, 2, ..., L} (L represents the data length), taking t = p = L in (15)–(18) leads to the following LSI algorithm for the moving average systems [38,39]:

ϑ̂_k = [Ĥ_k^T Ĥ_k]^{-1} Ĥ_k^T Y,    k = 1, 2, 3, ...,    (19)


Ĥ_k = [Ŵ_{k-1}(L-1), Ŵ_{k-1}(L-2), ..., Ŵ_{k-1}(L-n)],    (20)
Y = [y(L), y(L-1), ..., y(1)]^T,    (21)
Ŵ_k(t) = Y(t) - Ĥ_k(t) ϑ̂_k,    t = L-n, L-n+1, ..., L.    (22)

Here, ϑ̂_k denotes the parameter estimation vector of ϑ with the data length L.

The steps of computing ϑ̂_k in the LSI algorithm are listed in the following [31].

1. Collect the observation data {y(t): t = 1, 2, ..., L} (L ≫ n), form Y by (21) and give a small positive number ε.
2. To initialize, let k = 1 and Ŵ_0(t) be a random vector of dimension L.
3. Form Ĥ_k by (20).
4. Compute the estimate ϑ̂_k by (19).
5. Compute Ŵ_k(t) by (22).
6. If ||ϑ̂_k - ϑ̂_{k-1}|| ≤ ε, then terminate the procedure and obtain the iteration number k and the estimate ϑ̂_k; otherwise, increase k by 1 and go to step 3.

The flowchart of computing the parameter estimate ϑ̂_k is shown in Fig. 2 [31,35].
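The steps above can be sketched in Python. This is our own illustrative implementation, not the paper's code: the name `lsi_ma`, the zero-padding of pre-sample noise estimates, and the fixed iteration count (instead of the ε stopping rule) are assumptions. It follows (19)–(22) with t = p = L:

```python
import numpy as np

def lsi_ma(y, n, n_iter=15, seed=0):
    """Least-squares based iterative (LSI) estimation, Eqs. (19)-(22)."""
    y = np.asarray(y, dtype=float)
    L = y.size
    Y = y[::-1].copy()                      # Y = [y(L), y(L-1), ..., y(1)]^T, Eq. (21)
    rng = np.random.default_rng(seed)
    W = rng.normal(size=L)                  # step 2: W_hat_0 is a random vector
    theta = np.zeros(n)
    for k in range(n_iter):
        # Eq. (20): column i holds W_hat_{k-1} delayed by i samples;
        # entries before the record starts are set to zero (an assumption)
        H = np.column_stack([np.concatenate((W[i:], np.zeros(i)))
                             for i in range(1, n + 1)])
        theta, *_ = np.linalg.lstsq(H, Y, rcond=None)   # Eq. (19)
        W = Y - H @ theta                               # Eq. (22)
    return theta

# usage sketch with Example-2-style coefficients [0.8, 0.5, 0.2]
rng = np.random.default_rng(2)
w = rng.normal(size=4003)
y = w[3:] + 0.8 * w[2:-1] + 0.5 * w[1:-2] + 0.2 * w[:-3]
theta_hat = lsi_ma(y, n=3)
```

Using `np.linalg.lstsq` instead of forming [Ĥ_k^T Ĥ_k]^{-1} explicitly is a standard numerical choice; it computes the same least-squares solution more stably.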

    5. Examples and discussions

Example 1. Consider a third-order moving average process,

y(t) = w(t) + c_1 w(t-1) + c_2 w(t-2) + c_3 w(t-3) = w(t) + 0.80 w(t-1) + 0.60 w(t-2) + 0.30 w(t-3),
ϑ = [c_1, c_2, c_3]^T = [0.80, 0.60, 0.30]^T,
ϑ̂(t) = [ĉ_1(t), ĉ_2(t), ĉ_3(t)]^T.

In simulation, we use a white noise sequence with zero mean and variance σ^2 = 1.00^2 as {w(t)} and apply the RLS algorithm in (4)–(7), or equivalently the RLS algorithm in (8)–(11) with p = 1, to estimate the parameters of this moving average model. The parameter

Fig. 2. The flowchart of computing the estimate ϑ̂_k with k increasing.


estimates and errors are shown in Table 1, and the estimation error δ := ||ϑ̂(t) - ϑ|| / ||ϑ|| versus t is shown in Fig. 3. Fig. 4 plots the parameter estimates ĉ_i(t) versus t.

Example 2. Consider the following moving average process,

y(t) = w(t) + c_1 w(t-1) + c_2 w(t-2) + c_3 w(t-3) = w(t) + 0.80 w(t-1) + 0.50 w(t-2) + 0.20 w(t-3),

Table 1
The parameter estimates and errors of Example 1.

t            ĉ_1      ĉ_2      ĉ_3      δ (%)
100          0.73265  0.43594  0.13023  23.51543
200          0.79830  0.53677  0.17552  13.37383
500          0.79515  0.53507  0.20774  10.81594
1000         0.85700  0.59208  0.25301  7.11602
2000         0.83564  0.60372  0.25727  5.34144
3000         0.82384  0.59959  0.26358  4.16946
4000         0.81872  0.60412  0.27797  2.79682
5000         0.82260  0.59947  0.29081  2.33700
6000         0.82415  0.60637  0.29891  2.39448
7000         0.81769  0.60574  0.29958  1.78147
8000         0.80599  0.60746  0.30126  0.92408
9000         0.79844  0.60551  0.29866  0.56308
10,000       0.80440  0.60655  0.30158  0.77095
True values  0.80000  0.60000  0.30000

Fig. 4. The parameter estimates ĉ_i versus t of Example 1.

Fig. 3. The parameter estimation errors δ versus t of Example 1.


ϑ = [c_1, c_2, c_3]^T = [0.80, 0.50, 0.20]^T.

The simulation conditions are similar to those of Example 1. We use the RLS algorithm in (4)–(7) and the LSI algorithm in (19)–(22) with the data length t = p = L = 1000, 2000, 3000, 4000 and 5000 to estimate the parameters of this example system. The parameter estimates and the estimation errors δ := ||ϑ̂(t) - ϑ|| / ||ϑ|| (RLS) and δ := ||ϑ̂_k - ϑ|| / ||ϑ|| (LSI, k = 7) are shown in Tables 2 and 3.

From Tables 1–3 and Figs. 3 and 4, the following conclusions can be drawn.

1. The parameter estimates given by the RLS algorithm become close to their true values as the data length t increases (see Tables 1 and 2).
2. The parameter estimation errors given by the RLS algorithm (in general) become smaller, i.e., the estimation accuracy becomes higher, as the data length t increases (see Figs. 3 and 4).
3. As the data length increases, the parameter estimates given by the RLS algorithm can track the system parameters (see Fig. 4).
4. For the same data length L, the parameter estimation errors of the LSI algorithm are generally smaller than those of the RLS algorithm. This implies that the iterative algorithm is more accurate than the recursive algorithm (see Tables 2 and 3).

    6. Conclusion

This paper studies iterative and recursive least squares algorithms for moving average models. Although the RLS and LSI algorithms are developed for moving average models, the approaches can be extended to identify autoregressive moving average models. The least-squares based iterative algorithm for the moving average models is quite interesting, but its parameter estimation error bounds require further study.

    Acknowledgment

    This work was supported by the National Natural Science Foundation of China (No. 51204149).

    References

[1] S.E. Chouaba, A. Chamroo, R. Ouvrard, T. Poinot, A counter flow water to oil heat exchanger: MISO quasi linear parameter varying modeling and identification, Simulation Modelling Practice and Theory 23 (2012) 87–98.

[2] L. Ekonomou, S. Lazarou, G.E. Chatzarakis, V. Vita, Estimation of wind turbines optimal number and produced power in a wind farm using an artificial neural network model, Simulation Modelling Practice and Theory 21 (1) (2012) 21–25.

[3] S.H. Saïd, M.F. Mimouni, F. M'Sahli, M. Farza, High gain observer based on-line rotor and stator resistances estimation for IMs, Simulation Modelling Practice and Theory 19 (7) (2011) 1518–1529.

[4] V. Thanasis, B.S. Efthimia, K. Dimitris, Estimation of linear trend onset in time series, Simulation Modelling Practice and Theory 19 (5) (2011) 1384–1398.

[5] F. Ding, Hierarchical multi-innovation stochastic gradient algorithm for Hammerstein nonlinear system modeling, Applied Mathematical Modelling 37 (4) (2013) 1694–1704.

Table 2
The RLS estimates and errors of Example 2.

t            ĉ_1      ĉ_2      ĉ_3      δ (%)
1000         0.86004  0.49889  0.16593  7.15936
2000         0.83742  0.50912  0.16450  5.43140
3000         0.82519  0.50385  0.16853  4.19942
4000         0.81978  0.50760  0.18189  2.89037
5000         0.82348  0.50240  0.19397  2.52632
True values  0.80000  0.50000  0.20000

Table 3
The LSI estimates and errors of Example 2.

L            ĉ_1      ĉ_2      ĉ_3      δ (%)
1000         0.84058  0.55022  0.27130  9.97463
2000         0.79684  0.52360  0.22368  3.48215
3000         0.78134  0.50527  0.20033  2.01077
4000         0.79141  0.50473  0.20273  1.05553
5000         0.78743  0.49636  0.21784  2.29376
True values  0.80000  0.50000  0.20000


[6] F. Desbouvries, I. Fijalkow, P. Loubaton, On the identification of noisy MA models, IEEE Transactions on Automatic Control 41 (12) (1996) 1810–1814.

[7] S.S. Pappas, L. Ekonomou, P. Karampelas, S.K. Katsikas, P. Liatsis, Modeling of the grounding resistance variation using ARMA models, Simulation Modelling Practice and Theory 16 (5) (2008) 560–570.

[8] J. Franke, A Levinson–Durbin recursion for autoregressive-moving average processes, Biometrika 72 (3) (1985) 573–581.

[9] T. Tutunji, M. Molhim, E. Turki, Mechatronic systems identification using an impulse response recursive algorithm, Simulation Modelling Practice and Theory 15 (8) (2007) 970–988.

[10] F. Ding, T. Chen, Identification of Hammerstein nonlinear ARMAX systems, Automatica 41 (9) (2005) 1479–1489.

[11] F. Ding, X.P. Liu, G. Liu, Identification methods for Hammerstein nonlinear systems, Digital Signal Processing 21 (2) (2011) 215–238.

[12] D.Q. Wang, F. Ding, Least squares based and gradient based iterative identification for Wiener nonlinear systems, Signal Processing 91 (5) (2011) 1182–1189.

[13] S. Ozmutlu, H.C. Ozmutlu, B. Buyuk, A Monte-Carlo simulation application for automatic new topic identification of search engine transaction logs, Simulation Modelling Practice and Theory 16 (5) (2008) 519–538.

[14] F. Ding, T. Chen, Performance analysis of multi-innovation gradient type identification methods, Automatica 43 (1) (2007) 1–14.

[15] F. Ding, Several multi-innovation identification methods, Digital Signal Processing 20 (4) (2010) 1027–1039.

[16] Y.J. Liu, L. Yu, et al., Multi-innovation extended stochastic gradient algorithm and its performance analysis, Circuits Systems and Signal Processing 29 (4) (2010) 649–667.

[17] L. Xie, Y.J. Liu, et al., Modeling and identification for non-uniformly periodically sampled-data systems, IET Control Theory and Applications 4 (5) (2010) 784–794.

[18] F. Ding, Y. Gu, Performance analysis of the auxiliary model based least squares identification algorithm for one-step state delay systems, International Journal of Computer Mathematics 89 (15) (2012) 2019–2028.

[19] Y.J. Liu, Y.S. Xiao, X.L. Zhao, Multi-innovation stochastic gradient algorithm for multiple-input single-output systems using the auxiliary model, Applied Mathematics and Computation 215 (4) (2009) 1477–1483.

[20] Y.J. Liu, J. Sheng, R.F. Ding, Convergence of stochastic gradient estimation algorithm for multivariable ARX-like systems, Computers & Mathematics with Applications 59 (8) (2010) 2615–2627.

[21] F. Ding, X.P. Liu, G. Liu, Auxiliary model based multi-innovation extended stochastic gradient parameter estimation with colored measurement noises, Signal Processing 89 (10) (2009) 1883–1890.

[22] X.G. Liu, J. Lu, Least squares based iterative identification for a class of multirate systems, Automatica 46 (3) (2010) 549–554.

[23] Y.S. Xiao, N. Yue, Parameter estimation for nonlinear dynamical adjustment models, Mathematical and Computer Modelling 54 (5–6) (2011) 1561–1568.

[24] Y.S. Xiao, Y. Zhang, et al., The residual based interactive least squares algorithms and simulation studies, Computers & Mathematics with Applications 58 (6) (2009) 1190–1197.

[25] Y.S. Xiao, D.Q. Wang, et al., The residual based ESG algorithm and its performance analysis, Journal of the Franklin Institute 347 (2) (2010) 426–437.

[26] G.C. Goodwin, K.S. Sin, Adaptive Filtering Prediction and Control, Prentice-Hall, Englewood Cliffs, NJ, 1984.

[27] F. Ding, Y. Shi, T. Chen, Least squares identification of non-stationary MA systems, in: Proceedings of the 2005 American Control Conference (ACC2005), Portland, USA, June 8–10, 2005, pp. 4778–4783.

[28] F. Ding, Y. Shi, T. Chen, Performance analysis of estimation algorithms of non-stationary ARMA processes, IEEE Transactions on Signal Processing 54 (3) (2006) 1041–1053.

[29] D.Q. Wang, Least squares-based recursive and iterative estimation for output error moving average systems using data filtering, IET Control Theory and Applications 5 (14) (2011) 1648–1657.

[30] L. Ljung, System Identification: Theory for the User, second ed., Prentice-Hall, Englewood Cliffs, NJ, 1999.

[31] F. Ding, System Identification – New Theory and Methods, Science Press, Beijing, 2013.

[32] F. Ding, X.P. Liu, G. Liu, Multi-innovation least squares identification for linear and pseudo-linear regression models, IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics 40 (3) (2010) 767–778.

[33] F. Ding, T. Chen, Hierarchical least squares identification methods for multivariable systems, IEEE Transactions on Automatic Control 50 (3) (2005) 397–402.

[34] F. Ding, T. Chen, Hierarchical identification of lifted state-space models for general dual-rate systems, IEEE Transactions on Circuits and Systems I: Regular Papers 52 (6) (2005) 1179–1187.

[35] F. Ding, X.P. Liu, G. Liu, Gradient based and least-squares based iterative identification methods for OE and OEMA systems, Digital Signal Processing 20 (3) (2010) 664–677.

[36] D.Q. Wang, G.W. Yang, R.F. Ding, Gradient-based iterative parameter estimation for Box–Jenkins systems, Computers & Mathematics with Applications 60 (5) (2010) 1200–1208.

[37] Y.J. Liu, D.Q. Wang, et al., Least squares based iterative algorithms for identifying Box–Jenkins models with finite measurement data, Digital Signal Processing 20 (5) (2010) 1458–1467.

[38] F. Ding, Y.J. Liu, B. Bao, Gradient based and least squares based iterative estimation algorithms for multi-input multi-output systems, Proceedings of the Institution of Mechanical Engineers, Part I: Journal of Systems and Control Engineering 226 (1) (2012) 43–55.

[39] F. Ding, Two-stage least squares based iterative estimation algorithm for CARARMA system modeling, Applied Mathematical Modelling 37 (7) (2013) 4798–4808.
