

Traffic Models in Broadband Networks

Abdelnaser Adas, Georgia Institute of Technology

ABSTRACT

Traffic models are at the heart of any performance evaluation of telecommunications networks. An accurate estimation of network performance is critical for the success of broadband networks. Such networks need to guarantee an acceptable quality of service (QoS) level to the users. Therefore, traffic models need to be accurate and able to capture the statistical characteristics of the actual traffic. In this article we survey and examine traffic models that are currently used in the literature. Traditional short-range and non-traditional long-range dependent traffic models are presented. The number of parameters needed, parameter estimation, analytical tractability, and the ability of traffic models to capture the marginal distribution and auto-correlation structure of actual traffic are discussed.


The need for telecommunication networks capable of providing diverse and emerging communication services such as data, voice, and video motivated the standardization of broadband networks. The need for a flexible design that can accommodate future services and advances in technology led the International Consultative Committee for Telephone and Telegraph (CCITT) to adopt the asynchronous transfer mode (ATM). The transfer mode is a collection of mechanisms that are used to implement switching and multiplexing in the network.

The success of ATM networks depends on the development of effective congestion control schemes. These schemes are responsible for maintaining an acceptable quality of service (QoS) level that is deliverable by the network. The congestion control schemes decide to accept or reject new connections based on their traffic characteristics and the available network resources. They also police the existing connections to ensure that they do not exceed their negotiated traffic characteristic parameters.

Performance modeling techniques are needed to determine which congestion control techniques should be used. Performance modeling techniques include analytical techniques, computer simulation, and experimentation [1]. Performance models require accurate traffic models that can capture the statistical characteristics of actual traffic. If the traffic models do not accurately represent actual traffic, one may overestimate or underestimate network performance.

This article surveys traffic models in telecommunication networks. Traffic models can be stationary or nonstationary. Stationary traffic models can be classified in general into two classes: short-range and long-range dependent. Short-range dependent models include Markov processes and regression models; these traffic models have a correlation structure that is significant only for relatively small lags. Long-range dependent traffic models, such as the fractional autoregressive integrated moving average (F-ARIMA) and fractional Brownian motion, have significant correlations even at large lags.

Traffic models are analyzed based on goodness-of-fit, the number of parameters needed to describe the model, parameter estimation, and analytical tractability. To evaluate goodness-of-fit, one needs to define metrics that determine how close the model is to the actual data [2]. These metrics have to be directly related to the performance measures that are to be predicted from the model. The goodness-of-fit used in this article is based on the ability of the model to capture marginal distributions and auto-correlation structure, and ultimately to predict delays and cell loss probabilities.

This article is organized as follows. The second and third sections cover traditional traffic models: Markov and regression models. The fourth section discusses nontraditional traffic models; it briefly reviews long-range dependence and discusses three long-range dependent traffic models: fractional Brownian motion, F-ARIMA, and the aggregation of high-variability ON-OFF sources. Finally, the fifth section concludes the article.

MARKOV AND EMBEDDED MARKOV MODELS

In many situations, the activities of a source can be modeled by a finite number of states. Figure 1 shows a widely used finite-state model in voice telephony. In this model, a voice source is either idle or busy; when it is busy, it transmits packets only during speech activity. In general, increasing the number of states results in a more accurate model at the expense of increased computational complexity.

For a given state space S = {s_1, s_2, ..., s_M}, let X_n be a random variable that defines the state at time n. The set of random variables {X_n} forms a discrete Markov chain if the probability of the next value X_{n+1} = s_j depends only on the current state. This is known as the Markov property [3]. If state transitions occur at integer values (0, 1, ..., n, ...), the Markov chain is discrete time; otherwise, the Markov chain is continuous time.


Figure 1. Finite-state model for voice.

This research was supported in part by the National Science Foundation under grant NCR-9396299. This article is based on Georgia Tech technical report GIT-CC-96-01.


The Markov property implies that the future depends only on the current state and not on previous states nor on the time already spent in the current state. This restricts the random variable that describes the time spent in a state to a geometric distribution in the discrete case and to an exponential distribution in the continuous case [3].

A semi-Markov process is obtained by allowing the time between state transitions to follow an arbitrary probability distribution. If the time distribution between transitions is ignored, the sequence of states visited by the semi-Markov process is a discrete-time Markov chain, referred to as an embedded Markov chain.

In a simple Markov traffic model, each state transition represents a new arrival. Therefore, interarrival times are exponentially distributed (in the continuous-time case), and their rates depend on the state from which the transition occurs [1]. The rest of this section discusses various Markov and embedded Markov models that have been used to model network traffic.

ON-OFF AND IPP MODELS

The ON-OFF source model shown in Fig. 2a is the most popular source model for voice [4, 5]. In this model, packets are generated only during talk spurts (the ON state), with a fixed interarrival time. The times spent in the ON and OFF states are exponentially distributed with means 1/α and 1/β, respectively.

The interrupted Poisson process (IPP) shown in Fig. 2b is also a two-state process. Arrivals occur only in the active state, according to a Poisson process with rate λ. Hence, the IPP and ON-OFF models differ only in the interarrival times during the active (ON) state.
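As a concrete illustration of the ON-OFF voice source, the minimal sketch below generates packet arrival times for a single source, assuming exponential ON/OFF sojourns with means 1/alpha and 1/beta and a fixed packet spacing T_pkt during ON periods. The function name and the example parameter values are illustrative assumptions, not values taken from the article.

```python
import numpy as np

def on_off_arrivals(alpha, beta, T_pkt, horizon, rng=None):
    """Packet arrival times for one ON-OFF voice source.

    ON and OFF sojourns are exponential with means 1/alpha and 1/beta;
    packets are emitted every T_pkt seconds while the source is ON.
    """
    rng = rng or np.random.default_rng()
    t, arrivals = 0.0, []
    while t < horizon:
        on_len = rng.exponential(1.0 / alpha)            # talk spurt
        arrivals.extend(np.arange(t, min(t + on_len, horizon), T_pkt))
        t += on_len + rng.exponential(1.0 / beta)        # silence period
    return np.array(arrivals)

# Example: mean talk spurt 0.4 s, mean silence 0.6 s, one packet every 16 ms
pkts = on_off_arrivals(alpha=1 / 0.4, beta=1 / 0.6, T_pkt=0.016, horizon=60.0)
```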

ALTERNATING STATE RENEWAL PROCESS

The alternating state renewal process is a two-state process [6], with states s_1 and s_2 and no self-transitions. Therefore, the embedded Markov chain alternates between s_1 and s_2. The traffic amplitude is 0 while in state s_1 and 1 while in state s_2. Let the mean sojourn times in s_1 and s_2 be d_1 and d_2, respectively. Then the steady-state probability of being in state s_1 is P_{s_1} = d_1/(d_1 + d_2), and that of being in s_2 is P_{s_2} = d_2/(d_1 + d_2).

The superposition of identical independent alternating state renewal processes has a binomial distribution [6].

MARKOV MODULATED POISSON PROCESS

A Markov modulated process, also called a doubly stochastic process, uses an auxiliary Markov process in which the current state of the Markov process controls (modulates) the probability distribution of the traffic.

The Markov modulated Poisson process (MMPP) uses a Poisson process as the modulated mechanism, as shown in Fig. 3. In this model, while in state s_k, arrivals occur according to a Poisson process with rate λ_k. The introduction of the MMPP allows the modeling of time-varying sources while keeping the analytical solution of the related queuing performance tractable.

The MMPP parameters can be estimated easily from empirical data as follows: quantize the arrival rate into a finite number of rates corresponding to the number of states; each rate corresponds to a state in the Markov chain. The transition rate from state i to state j, denoted q_ij, is estimated by quantizing the empirical data and calculating the fraction of times that state (rate) i switched to state (rate) j. Note that an MMPP with M + 1 states can be obtained by the superposition of M identical independent IPP sources.
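The quantize-and-count estimation just described can be sketched as follows. The helper name estimate_mmpp, the inputs rate_series, n_states, and dt, and the generator-matrix packaging of the output are assumptions made for this example, not an API from any particular tool.

```python
import numpy as np

def estimate_mmpp(rate_series, n_states, dt=1.0):
    """Rough MMPP fit from an empirical arrival-rate series.

    Quantizes the observed rate into n_states levels, maps each sample to a
    state, and estimates transition rates q_ij from the observed switches.
    Returns the per-state Poisson rates and the transition-rate matrix.
    """
    r = np.asarray(rate_series, dtype=float)
    edges = np.linspace(r.min(), r.max(), n_states + 1)
    states = np.clip(np.digitize(r, edges[1:-1]), 0, n_states - 1)
    lam = np.array([r[states == s].mean() if np.any(states == s)
                    else 0.5 * (edges[s] + edges[s + 1])
                    for s in range(n_states)])           # Poisson rate per state
    counts = np.zeros((n_states, n_states))
    for a, b in zip(states[:-1], states[1:]):
        counts[a, b] += 1
    time_in_state = np.bincount(states[:-1], minlength=n_states) * dt
    Q = counts / np.maximum(time_in_state[:, None], 1e-12)
    np.fill_diagonal(Q, 0.0)                             # drop self-switches
    np.fill_diagonal(Q, -Q.sum(axis=1))                  # generator convention
    return lam, Q
```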

The MMPP can model a mixture of voice and data traffic. In this case, the arrivals of voice packets while in state k are assumed to be Poisson with rate λ_k, and data packets are Poisson with rate λ_d. The resulting rate of state s_k is then λ_d + λ_k. Performance measures such as the queuing distribution and the moments of the delay distribution are obtained using MMPP/G/1 queue analysis [4].

MARKOV MODULATED FLUID MODELS

Fluid models characterize the traffic as a continuous stream with a parameterized flow rate (such as bits per second). These models are appropriate when individual units of traffic (packets or cells) have little impact on the performance of the network.

Fluid models are conceptually simple, and their simulation has an important advantage over other models. Consider, for example, an event-driven simulation of an ATM multiplexer. Models that distinguish between cells, and treat the arrival of each cell as a separate event, consume vast amounts of memory and CPU resources. In contrast, fluid models characterize the incoming cells by a flow rate, and an event is triggered only when the flow rate changes. Since flow-rate changes happen much less frequently than cell arrivals, considerable savings in computing and memory resources are achieved [1].

A fluid model that is typically used to model traffic is the Markov modulated fluid model. In this model, the current state of the underlying Markov chain determines the flow (traffic) rate: while in state s_k, traffic arrives at a constant rate λ_k. This Markov modulated constant-rate model is used in [7, 8] to model VBR video sources.

In [7], the continuous bit rate is quantized into a finite set of discrete levels and sampled at random Poisson points (i.e., the intersample time is exponentially distributed), as shown in Fig. 4. The number of states in the Markov chain is equal to the number of quantized levels. Since Markov processes have an exponentially decaying auto-covariance function, the auto-covariance of the empirical data is approximated by C(τ) = C e^{−aτ}.

There are many Markov chains that satisfy the above auto-covariance function and the average of the empirical data. The birth-death Markov chain shown in Fig. 5 is used in [8] for its simplicity. In this model, the bit rate while in state i is constant and is given by iA, where A is the quantization step size. The transition rates are chosen such that lower-bit-rate states tend to jump to higher-bit-rate states, and vice versa. This model captured only approximately the first 10 lags of the auto-correlation function of the empirical data, because the auto-correlation function of the model decays faster than that of the actual data. Moreover, jumps are only allowed to neighboring states in a birth-death Markov chain, so the model lacks the ability to capture abrupt changes in the arrival rate between frames.
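The sketch below samples a rate path from such a birth-death modulated fluid source. The specific transition rates, (M − i)·lam upward and i·mu downward, are the familiar superposition-of-minisources parameterization and are assumed here for illustration rather than taken from [8]; note that an event is recorded only when the flow rate changes, which is exactly the simulation advantage described above.

```python
import numpy as np

def birth_death_fluid(M, A, lam, mu, horizon, rng=None):
    """Sample a rate path of a birth-death Markov modulated fluid source.

    State i in {0, ..., M} emits at constant rate i*A.  Upward jumps occur at
    rate (M - i)*lam and downward jumps at rate i*mu (an assumed
    minisource-style parameterization).  The rate is piecewise constant, so
    an event is recorded only when the flow rate changes.
    """
    rng = rng or np.random.default_rng()
    t, i = 0.0, 0
    times, rates = [0.0], [0.0]
    while t < horizon:
        up, down = (M - i) * lam, i * mu
        total = up + down
        t += rng.exponential(1.0 / total)                # time to next change
        i += 1 if rng.random() < up / total else -1
        times.append(t)
        rates.append(i * A)
    return np.array(times), np.array(rates)

# Example: 20 quantization levels, step A = 64 kb/s
times, rates = birth_death_fluid(M=20, A=64_000, lam=0.9, mu=1.1, horizon=100.0)
```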

In order to capture scene changes in the above model, [7] extended the model by allowing the rate to be an integer multiple of two basic levels: a high level A_h and a low level A_l. It uses a two-dimensional Markov chain in which the state is defined by two indices i and j, where 0 ≤ i ≤ M and 0 ≤ j ≤ N. While in state (i, j), the flow rate is iA_l + jA_h. Figure 6 illustrates the case when M = 1 and N = N_p.

Figure 2. a) ON-OFF model; b) IPP model.

Figure 3. MMPP process.


The queuing performance of this model is still analytically tractable, and it has been considered in [8]. The model has many parameters and an exponentially decaying auto-correlation function. The complexity of the analytical solution increases as more activity levels are added.

REGRESSION MODELS

Regression models define the next random variable in the sequence explicitly in terms of previous ones within a specified time window and a moving average of a white noise. In this section several regression models are presented.

AUTOREGRESSIVE MODELS

The autoregressive model of order p, denoted AR(p), has the following form:

X_t = φ_1 X_{t−1} + φ_2 X_{t−2} + … + φ_p X_{t−p} + ε_t,  (1)

where ε_t is white noise, the φ_j are real numbers, and the X_t are prescribed correlated random variables. If ε_t is white Gaussian noise with variance σ_ε², then the X_t are normally distributed random variables. Let us define the lag operator B by X_{t−1} = B X_t, and let φ(B) be the polynomial in the operator B defined as φ(B) = 1 − φ_1 B − … − φ_p B^p. Then the AR(p) process can be represented as

φ(B) X_t = ε_t.  (2)

The process {X_t} is stationary if the roots of φ(B) lie outside the unit circle [10]. The auto-correlation ρ_k can be computed by multiplying Eq. 1 by X_{t−k}, taking the expectation, and dividing by the variance γ_0:

ρ_k = φ_1 ρ_{k−1} + φ_2 ρ_{k−2} + … + φ_p ρ_{k−p}, for k > 0.  (3)

Thus, the general solution is

ρ_k = A_1 G_1^k + A_2 G_2^k + … + A_p G_p^k,  (4)

where the G_i^{−1} are the roots of φ(B). Therefore, the auto-correlation function of an AR(p) process will in general consist of damped exponentials and/or damped sine waves, depending on whether the roots are real or complex.

Since successive video frames do not vary much visually, AR models have been used to model the output bit rate of a VBR encoder [7, 11]. In [7], a video source is approximated by a continuous fluid-flow model. In the model, the output bit rate within a frame period is constant and changes from frame to frame according to the following AR(1) model:

λ[n] = a λ[n − 1] + b w[n],  (5)

where λ[n] is the bit rate during frame n and w[n] is a Gaussian white noise. The parameters are chosen such that the probability of λ[n] being negative is very small; since the number of bits in frame n cannot be negative, the value of λ[n] in Eq. 5 is set to zero whenever it is negative. This model cannot capture abrupt changes in the frame bit rates that occur due to scene changes or visual discontinuities. Therefore, one may model the bit rate of frames within a scene as an AR process and model the scene changes by an underlying Markov chain.
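A small sketch of the AR(1) frame-rate model of Eq. 5 follows. The default coefficients a and b, the target mean, and the unit-variance noise are illustrative placeholders rather than values reported for any particular encoder; negative rates are clipped to zero as described above.

```python
import numpy as np

def ar1_frame_rates(n_frames, a=0.9, b=0.1, mean_rate=0.5, rng=None):
    """Per-frame bit rates from Eq. 5: lam[n] = a*lam[n-1] + b*w[n].

    w[n] is unit-variance Gaussian noise whose mean is chosen so the
    stationary mean equals mean_rate; negative values are clipped to zero.
    The default a, b, and mean_rate are illustrative placeholders.
    """
    rng = rng or np.random.default_rng()
    w_mean = mean_rate * (1 - a) / b          # gives stationary mean = mean_rate
    lam = np.empty(n_frames)
    lam[0] = mean_rate
    for n in range(1, n_frames):
        lam[n] = a * lam[n - 1] + b * rng.normal(loc=w_mean, scale=1.0)
        lam[n] = max(lam[n], 0.0)             # bits per frame cannot be negative
    return lam
```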

In [11], VBR video traffic is modeled as X_n = Y_n + Z_n + V_n C_n. Y_n and Z_n are two independent AR(1) processes; using two AR processes gives a better fit to the auto-correlation function of the empirical data than a single AR(1) process. The last term, V_n C_n, is the product of a three-state Markov chain and an independent normal random variable. It is designed to capture sample-path spikes due to video scene changes. This may work for encoding techniques in which only the difference between each frame and a reference frame is encoded and transmitted. Reference frames are transmitted when the difference is greater than a certain threshold; they generally indicate a scene change. Reference frames have higher bit rates than the surrounding frames, and they cause the spikes in the sample path of the bit rate of the video stream. In MPEG, scene-change detection is not trivial because a reference frame (I frame) is transmitted periodically.

Although it is easy to estimate the AR model parameters and to generate the sequence recursively, the exponential decay of the auto-correlation function makes the model unable to capture auto-correlation functions that decay at a slower rate than the exponential. The AR model is approximated in [7] by a Markov modulated fluid model in order to obtain analytical queuing performance results. AR processes with a Gaussian distribution cannot capture the probability distribution of VBR video traffic, since the VBR video traffic distribution exhibits a heavier tail than the Gaussian.

DISCRETE AUTOREGRESSIVE MODELS

A discrete autoregressive model of order p, denoted DAR(p), generates a stationary sequence of discrete random variables with an arbitrary probability distribution and with an auto-correlation structure similar to that of an AR(p).

DAR(1) is a special case of the DAR(p) process and is defined as follows. Let {V_n} and {Y_n} be two sequences of independent random variables. The random variable V_n takes the values 1 and 0 with probabilities ρ and 1 − ρ, respectively. The random variable Y_n has a discrete state space S, with P{Y_n = i} = π(i). The sequence of random variables {X_n} formed according to the linear model

X_n = V_n X_{n−1} + (1 − V_n) Y_n

is a DAR(1) process. A DAR(1) process is a Markov chain with discrete state space S and transition matrix

P = ρ I + (1 − ρ) Q,

where I is the identity matrix and Q is the matrix with Q_ij = π(j) for i, j ∈ S. DAR(1) has the correlation structure of a first-order autoregressive process, with ρ_k = ρ^k, and has probability distribution π. In [12-14] the number of cells per frame of teleconferencing VBR video is modeled by DAR(1) with a negative binomial distribution. The rows of the Q matrix consist of the negative-binomial probabilities (f_0, f_1, ..., f_K, F_K^c), where F_K^c = Σ_{k>K} f_k and K is the peak rate. Therefore, only the mean, variance, peak rate, and first auto-correlation coefficient need to be estimated from the data.
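The DAR(1) construction is straightforward to simulate for any discrete marginal, as in the sketch below. The example pmf and the value of rho are made-up illustrative numbers, not the negative-binomial parameters used in [12-14].

```python
import numpy as np

def dar1(n, rho, pmf, rng=None):
    """Generate a DAR(1) path X_n = V_n*X_{n-1} + (1 - V_n)*Y_n.

    V_n is Bernoulli(rho); Y_n is drawn i.i.d. from the marginal pmf (any
    discrete distribution, e.g. a truncated negative binomial whose tail
    mass is lumped at the peak rate K).
    """
    rng = rng or np.random.default_rng()
    states = np.arange(len(pmf))
    x = np.empty(n, dtype=int)
    x[0] = rng.choice(states, p=pmf)
    for t in range(1, n):
        if rng.random() < rho:                # keep the previous value
            x[t] = x[t - 1]
        else:                                 # innovate from the marginal
            x[t] = rng.choice(states, p=pmf)
    return x

# Example with a made-up marginal over cell counts 0..5 and lag-1 correlation 0.98
pmf = np.array([0.05, 0.15, 0.30, 0.25, 0.15, 0.10])
cells_per_frame = dar1(1000, rho=0.98, pmf=pmf)
```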

Figure 4. Approximation of a continuous source rate λ(t) by Poisson sampling and quantization into a modulated fluid flow.

Figure 5. State transition diagram for the birth-death Markov chain.


DAR(1) has far fewer parameters than a general Markov chain, its parameter estimation is simple, the distribution of the resulting process is arbitrary, and its analytical queuing performance is tractable. On the other hand, the auto-correlation function decays exponentially, and hence DAR(1) cannot be used to model traffic with a slower auto-correlation decay.

AUTOREGRESSIVE MOVING AVERAGE MODELS

An autoregressive moving average model of order (p, q), denoted ARMA(p, q), has the form

X_t = φ_1 X_{t−1} + φ_2 X_{t−2} + … + φ_p X_{t−p} + ε_t − θ_1 ε_{t−1} − θ_2 ε_{t−2} − … − θ_q ε_{t−q},  (6)

which can be equivalently represented as

φ(B) X_t = θ(B) ε_t,  (7)

where B and φ(B) are as defined previously, and θ(B) = 1 − θ_1 B − … − θ_q B^q.

This is equivalent to filtering a white noise process ε_t by a causal linear shift-invariant filter having a rational system function with p poles and q zeros [15]; that is,

H(z) = B_q(z)/A_p(z) = (1 − Σ_{k=1}^{q} θ_k z^{−k}) / (1 − Σ_{k=1}^{p} φ_k z^{−k}).  (8)
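The filtering view of Eq. 8 maps directly onto a standard digital-filter routine, as in the short sketch below, which generates an ARMA(2, 1) sample path by filtering white noise; the coefficient values are illustrative and not fitted to any traffic trace.

```python
import numpy as np
from scipy.signal import lfilter

# ARMA(2, 1) sample path by filtering white noise through H(z) = theta(z)/phi(z).
phi = [0.7, -0.2]                          # AR coefficients phi_1, phi_2
theta = [0.4]                              # MA coefficient theta_1
a = np.r_[1.0, -np.asarray(phi)]           # denominator 1 - phi_1 z^-1 - phi_2 z^-2
b = np.r_[1.0, -np.asarray(theta)]         # numerator   1 - theta_1 z^-1
eps = np.random.default_rng(0).normal(size=10_000)   # white Gaussian noise
x = lfilter(b, a, eps)                     # ARMA(2, 1) sequence
```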

The auto-covariance γ_k of the ARMA(p, q) process can be obtained by multiplying Eq. 6 by X_{t−k}, taking the expectation, and finding the cross-correlation between ε_t and X_t:

γ_k = φ_1 γ_{k−1} + … + φ_p γ_{k−p} − σ_ε² (θ_k h_0 + θ_{k+1} h_1 + … + θ_q h_{q−k}),  (9)

where h_t is the impulse response of the ARMA(p, q) filter H(z). Note that θ_k = 0 for k > q; therefore, the auto-correlation of the process for k > q satisfies

ρ_k = φ_1 ρ_{k−1} + φ_2 ρ_{k−2} + … + φ_p ρ_{k−p}, for k > q,  (10)

which is the same difference equation as Eq. 3; therefore, the auto-correlation of the ARMA(p, q) process decays exponentially.

An ARMA model is used in [16] to model VBR traffic. The duration of a video frame is divided equally into m time intervals, and the number of cells in the nth time interval is modeled by the following ARMA process:

X_n = φ X_{n−m} + Σ_{k=0}^{m−1} θ_k ε_{n−k}.

Since video data recorrelate at each frame period due to temporal correlation, the auto-correlation function has peaks at all lags that are integer multiples of m. In the above model, the AR part is used to model this recorrelation effect, and the θ_k are used to fit the correlation at the other lags.

The parameter estimation of ARMA models is more involved than that of AR models; the estimation of the θ_k requires solving a set of non-linear equations or using spectral factorization techniques [15]. Analytical queuing solutions are also difficult to obtain.

AUTOREGRESSIVE INTEGRATED MOVING AVERAGE MODELS

The autoregressive integrated moving average model of order (p, d, q), denoted ARIMA(p, d, q), is an extension of the ARMA(p, q). It is obtained by allowing the polynomial φ(B) to have d roots equal to unity, with the rest of the roots lying outside the unit circle. The ARIMA(p, d, q) model has the form

φ(B) ∇^d X_t = θ(B) ε_t,  (11)

where ∇ is the difference operator, defined by ∇X_t = X_t − X_{t−1}, and θ(B) is a polynomial in B. Notice that ∇X_t = (1 − B) X_t. The ARIMA(p, d, q) model is used to model homogeneous nonstationary time series. For example, a time series that exhibits nonstationarity in level, or in level and slope, can be modeled by ARIMA(p, 1, q) or ARIMA(p, 2, q), respectively [17].

TES MODELS

Transform-expand-sample (TES) models are non-linear regression models with modulo-1 arithmetic. They aim to capture both the auto-correlation and the marginal distribution of empirical data.

TES models consist of two major TES processes [1, 18, 19]: TES+ and TES−. TES+ produces a sequence with positive correlation at lag 1, while TES− produces negative correlation at lag 1.

Before describing TES+ and TES−, we need to introduce a few definitions and notations. The modulo-1 of a real number x, denoted ⟨x⟩, is defined as ⟨x⟩ = x − ⌊x⌋, where ⌊x⌋ is the largest integer not exceeding x; therefore ⟨x⟩ is always non-negative. If the interval [0, 1) is viewed as a circle obtained by joining the points 0 and 1, one can define a circular interval C[a, b), where a, b ∈ [0, 1), as all the points on the circular unit interval going clockwise from point a to point b. Therefore,

C[a, b) = [a, b) if a ≤ b, and C[a, b) = [0, 1) \ [b, a) if a > b.

TES+ and TES− — TES+(L, R) is introduced in [19] and is characterized by two parameters, L and R. The sequence {U_n+} is generated recursively as follows: initialize U_0+ = U_0, where U_0 is uniform on the interval (0, 1). Then U_n+ is a uniformly sampled random variable on the circular interval C_{U_n+} = [⟨U_{n−1}+ − L⟩, ⟨U_{n−1}+ + R⟩).

In TES−(L, R), the sequence is generated as in TES+, with U_n− uniform on the circular interval

C_{U_n−} = [⟨U_{n−1}− − L⟩, ⟨U_{n−1}− + R⟩) for n even, and C_{U_n−} = [⟨1 − U_{n−1}− − R⟩, ⟨1 − U_{n−1}− + L⟩) for n odd.

TES+ and TES− can also be characterized by α = L + R and φ = (R − L)/α. Note that α represents the length of the circular interval. Sample-path realizations generated by simulating TES+ and TES− show discontinuities due to crossings of the point 0 on the unit circular interval from either direction; for example, crossing clockwise results in a jump from small values to large values. It was shown in [19] that a continuous sample-path realization can be obtained by using a simple piecewise transformation T_ξ, called stitching, where

T_ξ(x) = x/ξ for x ∈ [0, ξ), and T_ξ(x) = (1 − x)/(1 − ξ) for x ∈ [ξ, 1).

Autocorrelation of TES+ and TES− — The lag-1 auto-correlation for TES processes is derived in [19] and is given by


Figure 6. Fluid-flow model for VBR sources with two levels of activity.


ρ_1+(α, φ) = 1 − 3α(1 + φ²)/2 + α²(1 + 3φ²)/2,  (12)

ρ_1−(α, φ) = −1 + 3α(1 + φ²)/2 − α²(1 + 3φ²)/2.  (13)

The auto-correlation for higher-order lags is studied by simulation in [19], and its computation is presented in [18]. The value of α affects the magnitude of the correlation, while the value of φ affects the oscillating behavior of the auto-correlation. The larger the α, the smaller the magnitude. If φ = 0, there is no oscillation; for φ ≠ 0, the larger the φ, the faster the oscillation.

In [18], the definitions in [19] are generalized. The recursive construction of the underlying TES processes is defined as follows:

U_n+ = U_0 for n = 0, and U_n+ = ⟨U_{n−1}+ + V_n⟩ for n > 0;

U_n− = U_n+ for n even, and U_n− = 1 − U_n+ for n odd.

Here, {V_n} is a sequence of independent, identically distributed random variables, independent of U_0. The resulting sequences {U_n+} and {U_n−} are uniformly distributed in [0, 1) no matter what the density function of V_n, denoted f_V, is [1, 18]. The choice of f_V results in a different correlation structure for the resulting process.

The targeted sequences {X_n+} and {X_n−} are then obtained by the inversions X_n+ = D(U_n+) and X_n− = D(U_n−), where D = F^{−1} and F is the marginal distribution of the desired sequence (the empirical data). The fitting of the auto-correlation is done by a heuristic search for a pair (ξ, f_V) [1]. A combination of TES processes can also be used to better fit the auto-correlation. The empirical distribution is matched using the distribution-inversion method. The auto-correlation function of TES processes decays exponentially.
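Putting the pieces together (background recursion, stitching, and inversion), a TES+ generator can be sketched as below. The uniform innovation density on [−L, R), the stitching parameter ξ = 0.5 used as a default, and the exponential marginal in the example are assumptions made for illustration, not choices prescribed by the article.

```python
import numpy as np

def tes_plus(n, L, R, xi=0.5, marginal_inv=None, rng=None):
    """TES+ sequence: background recursion, stitching, then inversion.

    U_n = <U_{n-1} + V_n> with V_n uniform on [-L, R); T_xi removes the
    wrap-around jumps; marginal_inv (an inverse CDF) maps the stitched
    uniform sequence onto the desired empirical distribution.
    """
    rng = rng or np.random.default_rng()
    u = np.empty(n)
    u[0] = rng.random()
    for i in range(1, n):
        u[i] = (u[i - 1] + rng.uniform(-L, R)) % 1.0     # modulo-1 recursion
    stitched = np.where(u < xi, u / xi, (1.0 - u) / (1.0 - xi))
    return marginal_inv(stitched) if marginal_inv else stitched

# Example: exponential marginal with mean 10 via inverse-CDF transform
inv_cdf = lambda p: -10.0 * np.log(1.0 - np.clip(p, 0.0, 1.0 - 1e-12))
x = tes_plus(5000, L=0.05, R=0.10, marginal_inv=inv_cdf)
```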

LONG-RANGE DEPENDENT TRAFFIC MODELS

Stationary traffic models presented in the second and third sections have a correlation structure that is characterized by an exponential decay. Recent analyses of traffic measurements of Ethernet LAN traffic [20] and NSFNET traffic [21] have suggested that the auto-correlation decays to zero at a slower rate than the exponential. This slow decay in correlation has been observed before in other statistical applications, e.g., hydrology [22], and mathematical models have been developed to capture this behavior [23, 24].

BACKGROUND ON LONG-RANGE DEPENDENCE

Let {X_t}, t = 0, 1, 2, …, be a wide-sense stationary stochastic process, i.e., a process with a stationary mean μ = E[X_t], a stationary and finite variance v = E[(X_t − μ)²], and a stationary auto-covariance function γ_k = E[(X_t − μ)(X_{t+k} − μ)] that depends only on k and not on t. Observe that v = γ_0. Let the auto-correlation of {X_t} at lag k be denoted ρ_k, where by definition ρ_k = γ_k/γ_0.

For each m, let {X_j^(m)} denote a new time series obtained by averaging the original series {X_t} over non-overlapping blocks of size m, i.e.,

X_j^(m) = (1/m)(X_{jm−m+1} + … + X_{jm}).  (14)

Observe that X_j^(m) is the sample mean of X_{jm−m+1}, …, X_{jm}. Let v_m denote the variance of {X_j^(m)}; it is given by [17, 25]:

v_m = E{[(1/m)(X_{jm−m+1} + … + X_{jm})]²} − (E{(1/m)(X_{jm−m+1} + … + X_{jm})})²  (15)

    = v/m + (2/m²) Σ_{k=1}^{m} (m − k) γ_k  (16)

    = (v/m) [1 + 2 Σ_{k=1}^{m} (1 − k/m) ρ_k].  (17)

Hence, if the process is white noise, X_{jm−m+1}, …, X_{jm} are mutually uncorrelated, i.e., ρ_k = 0 for k > 0, and v_m = v m^{−1}.

For large m, Eq. 17 can be approximated by

v_m ≈ (v/m) [1 + 2 Σ_{k=1}^{m} ρ_k].  (18)

Consider the case in which ρ_k → 0 and Σ_{k=−∞}^{∞} ρ_k < ∞. The variance of the sample mean will then asymptotically decay to zero in proportion to m^{−1}, i.e.,

v_m ≈ v c m^{−1},  (19)

where c is a constant. Time series models such as ARMA and Markov processes have a sample-mean variance that decays asymptotically according to Eq. 19.

Recently, several traffic measurements (see, e.g., [20, 21, 26]) have shown that the sample-mean variance v_m decays at a slower rate than m^{−1}. One simple approach to model this decay explicitly is to have v_m decay in proportion to m^{−β} for some β ∈ (0, 1). This requires that Σ_{k=1}^{m} ρ_k be proportional to m^{1−β}, that is,

Σ_{k=1}^{m} ρ_k ≈ C m^{1−β}.  (20)

Since β < 1, this implies

Σ_{k=1}^{∞} ρ_k = ∞.

Thus, the auto-correlation decays slowly, in such a way that it is not summable. An example of such an auto-correlation function is

ρ_k ~ C |k|^{−β}, for large k.  (21)

Short-Range and Long-Range Dependence — The process {X_t} is said to have short-range dependence if Σ_k ρ_k < ∞. Equivalently, v_m decays asymptotically in proportion to m^{−1}, the power spectral density has a finite value at zero, and the averaged process {X_t^(m)} tends to second-order pure noise as m → ∞. Processes whose auto-correlation functions decay exponentially are short-range dependent processes. Figure 7a shows an example of the auto-correlation of a short-range dependent process.

The process {X_t} is said to have long-range dependence if Σ_k ρ_k = ∞ [17]. Since the power spectral density function is defined as Σ_k ρ_k e^{−jωk}, the power spectral density function is then singular near zero; that is, it increases without limit as the frequency tends to zero. The variance of the sample mean, v_m, decays at a slower rate than m^{−1}. For example, processes in which ρ_k ~ C k^{−β} (for large k), where β ∈ (0, 1), are long-range dependent, and their v_m decays in proportion to m^{−β}. Figure 7b shows an auto-correlation function of a long-range dependent process.

It is important to note that the definition of long-memory dependence is an asymptotic one [25]. It only describes the behavior of the auto-correlation at large lags; it does not specify the auto-correlation at any fixed finite lag.

Self-Similarity — The process {X_t} is said to be exactly (second-order) self-similar if ρ_k^(m) = ρ_k for all m and k, i.e., the correlation structure is preserved across different time scales.



The process {X_t} is said to be asymptotically self-similar if ρ_k^(m) → ρ_k for m and k large [27].

Stochastic self-similar processes retain the same statistics over a range of time scales, and they satisfy the following relation:

{X_{at}} =_D a^H {X_t},

where =_D denotes equality in distribution and H is called the Hurst parameter [28]. Therefore, the sample paths appear to be qualitatively the same, irrespective of the time scale. This does not mean that the same picture repeats itself exactly, as in deterministic self-similarity; it is the general impression that remains the same [25].

If {X_t} has stationary increments, then the increment process Y_t = X_t − X_{t−1} has an auto-correlation of the form

ρ_k ≈ H(2H − 1) k^{2H−2}, as k → ∞.  (22)

This can be verified by using the self-similarity definition {X_t} =_D t^H {X_1}, defining σ² = E[(X_t − X_{t−1})²] = E[X_1²] as the variance of the increment process {Y_t}, and finding the covariance of the increment process {Y_t} [25]. Comparing Eq. 22 to Eq. 21 gives β = 2 − 2H, i.e., H = 1 − β/2. Note that for H ∈ (0, 1) and H ≠ 1/2, ρ_k ≠ 0. Fractional Gaussian noise is an example of an exactly self-similar process, and fractional ARIMA is an example of an asymptotically self-similar process.

FRACTIONAL ARIMA

The fractional autoregressive integrated moving average process, F-ARIMA(p, d, q) with 0 < d < 1/2, is an example of a stationary process with long-range dependence. It is an extension of ARIMA(p, d, q), defined as

φ(B) ∇^d X_t = θ(B) ε_t,  (23)

where d can take values between 0 and 1/2. The operator ∇^d = (1 − B)^d can be expressed using the binomial expansion [25], with binomial coefficients expressed through the gamma function Γ(x):

(d choose k) = d! / (k! (d − k)!) = Γ(d + 1) / (Γ(k + 1) Γ(d − k + 1)),  (24)

(1 − B)^d = Σ_{k=0}^{∞} (d choose k) (−1)^k B^k.  (25)

Note that when d is a positive integer, only the first d + 1 terms in Eq. 25 are non-zero; this is because the gamma function has poles at the non-positive integers, and hence the binomial coefficients are zero for k > d when d is an integer. The representation in Eq. 25 of ∇^d is equivalent to an infinite-order autoregressive process (an all-pole filter of infinite order). The F-ARIMA(0, d, 0) process with 0 < d < 1/2 is stationary and long-range dependent, with auto-correlation function [29]

ρ_k = Γ(1 − d) Γ(k + d) / (Γ(d) Γ(k + 1 − d)) ~ [Γ(1 − d)/Γ(d)] k^{2d−1}, as k → ∞.  (26)

Observe that for 0 < d < 1/2, the hyperbolic decay produces persistence. Comparing Eq. 26 to Eq. 21 gives d = (1 − β)/2 = H − 0.5. F-ARIMA processes can model both short-range and long-range dependence. If Gaussian white noise is used, then the F-ARIMA process has a Gaussian distribution; this limits F-ARIMA to modeling processes that have approximately Gaussian distributions. Gaussian white noise is used because the sum of two Gaussian random variables is Gaussian; this is also true for the class of stable random variables, which includes the Gaussian random variables [28].
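As a rough illustration of how an F-ARIMA(0, d, 0) sample path can be produced, the sketch below drives the truncated MA(∞) representation of (1 − B)^{−d} with Gaussian white noise, using the gamma-function weights implied by Eqs. 24 and 25; the function name and the truncation length are arbitrary choices for this example.

```python
import numpy as np
from scipy.special import gammaln

def farima_0d0(n, d, trunc=5000, rng=None):
    """Approximate F-ARIMA(0, d, 0) sample via a truncated MA(inf) representation.

    X_t = sum_k psi_k * eps_{t-k}, with psi_k = Gamma(k + d) / (Gamma(d) Gamma(k + 1)),
    truncated at `trunc` terms; valid for 0 < d < 1/2 (H = d + 1/2).
    """
    rng = rng or np.random.default_rng()
    k = np.arange(trunc)
    psi = np.exp(gammaln(k + d) - gammaln(d) - gammaln(k + 1))   # MA weights
    eps = rng.normal(size=n + trunc)
    # discrete convolution: X_t = sum_{j < trunc} psi_j * eps_{t-j}
    return np.convolve(eps, psi, mode="full")[trunc:trunc + n]
```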

One way to estimate d is the variance-time plot method. In this method, Var({X^(m)}) = v_m is plotted versus the aggregation level m on a log-log scale. The asymptotic slope −β is then estimated, and the value d = (1 − β)/2 is obtained. F-ARIMA was used to model VBR video traffic [26, 30], and the queuing performance of F-ARIMA(1, d, 0) was analyzed through simulation for a first-come-first-served (FCFS) queue in [31].
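A minimal version of the variance-time plot estimate can be written as follows; the helper name and the choice of aggregation levels are illustrative. As a sanity check, applying it to output of the hypothetical farima_0d0 sketch above should return a d close to the value used to generate the data.

```python
import numpy as np

def variance_time_slope(x, m_values=None):
    """Variance-time plot estimate: returns (beta, H, d) for a series x.

    For each aggregation level m, the series is averaged over non-overlapping
    blocks of size m; the slope of log Var(X^(m)) versus log m is -beta,
    with H = 1 - beta/2 and d = H - 1/2.
    """
    x = np.asarray(x, dtype=float)
    if m_values is None:
        m_max = max(len(x) // 10, 2)
        m_values = np.unique(np.logspace(0, np.log10(m_max), 20).astype(int))
    log_m, log_v = [], []
    for m in m_values:
        n_blocks = len(x) // m
        if n_blocks < 2:
            continue
        agg = x[:n_blocks * m].reshape(n_blocks, m).mean(axis=1)
        log_m.append(np.log(m))
        log_v.append(np.log(agg.var()))
    slope = np.polyfit(log_m, log_v, 1)[0]
    beta = -slope
    H = 1.0 - beta / 2.0
    return beta, H, H - 0.5
```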

FRACTIONAL BROWNIAN MOTION

Brownian motion is a stochastic process, denoted {B_t}, t ≥ 0, characterized by the following properties [32]:

• The increments B_{t_0+t} − B_{t_0} are normally distributed with mean 0 and variance σ²t.
• The increments in non-overlapping time intervals [t_1, t_2] and [t_3, t_4], i.e., B_{t_4} − B_{t_3} and B_{t_2} − B_{t_1}, are independent random variables.
• B_0 = 0, and B_t is continuous at t = 0.

Fractional Brownian motion {fB_t} is a Gaussian self-similar process with self-similarity parameter H ∈ [0.5, 1) [33]. It differs from Brownian motion in that its increments have variance σ²t^{2H}. Define σ² = E{(fB_t − fB_{t−1})²} = E{(fB_1 − fB_0)²} = E{fB_1²}, the variance of the increment process (note that fB_0 = 0). Then

E{(fB_{t_2} − fB_{t_1})²} = E{(fB_{t_2−t_1} − fB_0)²} = σ²(t_2 − t_1)^{2H}.

Also,

E{(fB_{t_2} − fB_{t_1})²} = E{fB_{t_2}²} + E{fB_{t_1}²} − 2E{fB_{t_2} fB_{t_1}} = σ²t_2^{2H} + σ²t_1^{2H} − 2γ(fB_{t_1}, fB_{t_2}),

so that

γ(fB_{t_1}, fB_{t_2}) = (σ²/2)[t_2^{2H} − (t_2 − t_1)^{2H} + t_1^{2H}].

Hence, the covariance of increments over two non-overlapping intervals is given by

γ(fB_{t_4} − fB_{t_3}, fB_{t_2} − fB_{t_1}) = γ(fB_{t_4}, fB_{t_2}) − γ(fB_{t_4}, fB_{t_1}) − γ(fB_{t_3}, fB_{t_2}) + γ(fB_{t_3}, fB_{t_1})  (27)

= (σ²/2)[(t_4 − t_1)^{2H} − (t_3 − t_1)^{2H} + (t_3 − t_2)^{2H} − (t_4 − t_2)^{2H}].


Figure 7. Examples of auto-correlation structures of a) a short-range dependent process and b) a long-range dependent process.


The fractional Brownian motion {fB_t} can be deduced from Brownian motion {B_t} [24, 25] by forming the integral

fB_t = ∫_0^t (t − u)^{H−0.5} dB(u).  (28)

As Eq. 28 shows, the interdependence between the increments of the fractional Brownian motion can be said to be infinite.

In the discrete case [25], the auto-correlation of the increment sequence is obtained by replacing t_1, t_2, t_3, and t_4 in Eq. 27 by n, n + 1, n + k, and n + k + 1, respectively, and dividing by σ²:

ρ_k = (1/2)[(k + 1)^{2H} − 2k^{2H} + (k − 1)^{2H}].  (29)

The increment sequence is called fractional Gaussian noise. The auto-correlation in Eq. 29 exhibits long-range dependence, since ρ_k ~ H(2H − 1)k^{2H−2} as k → ∞ (this follows by Taylor expansion). The use of fractional Brownian motion to model traffic is presented in [33]. The analytical solution for the distribution of the buffer occupancy is difficult [33]; therefore, an approximation for the tail behavior is obtained. Results show that for large H, increasing the utilization requires a significant amount of storage space. The probability of cell loss decreases algebraically with buffer size, not exponentially as it does for Markovian and ARMA models. The Hurst parameter H can be estimated using the variance-time plot, with H = 1 − β/2, where −β is the slope of the plot. For generating fBm processes, see [24, 34].
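References [24, 34] describe efficient generators; as a simple self-contained alternative, the sketch below produces exact fractional Gaussian noise by Cholesky-factoring the covariance implied by Eq. 29. The function name and the small diagonal jitter are implementation choices of this example, and the O(n³) cost makes it practical only for modest sample sizes.

```python
import numpy as np

def fgn_cholesky(n, H, sigma=1.0, rng=None):
    """Exact (but O(n^3)) fractional Gaussian noise via Cholesky factorization.

    Builds the Toeplitz covariance from Eq. 29,
    gamma_k = (sigma^2 / 2) * ((k+1)^{2H} - 2 k^{2H} + |k-1|^{2H}),
    and colors white Gaussian noise with its Cholesky factor.
    """
    rng = rng or np.random.default_rng()
    k = np.arange(n)
    gamma = 0.5 * sigma**2 * ((k + 1.0)**(2 * H) - 2.0 * k**(2.0 * H)
                              + np.abs(k - 1.0)**(2 * H))
    cov = gamma[np.abs(k[:, None] - k[None, :])]         # Toeplitz covariance
    L = np.linalg.cholesky(cov + 1e-12 * np.eye(n))      # jitter for stability
    return L @ rng.normal(size=n)

# Cumulative sum gives an approximate fractional Brownian motion path
fbm_path = np.cumsum(fgn_cholesky(2000, H=0.8))
```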

SUPERPOSITION OF HIGH-VARIABILITY ON-OFF SOURCES

Traditional ON-OFF source models assume finite-variance distributions for the sojourn times in the ON and OFF periods. As a result, the aggregation of a large number of such sources will not have significant correlation, except possibly in the short range [35].

An extension to such traditional ON-OFF models was first introduced in [36, 37] (as cited in [35]) by allowing the ON and OFF periods to have infinite variance (high variability, or the Noah effect). The superposition of many such sources produces aggregate traffic that exhibits long-range dependence (also called the Joseph effect) [35, 38].

The source model used in [38] can be described as follows:

• Denote by y_t^(i) the cell generation rate of source i at time t. Source i transmits cells at rate R while in the ON period and does not transmit while in the OFF period.
• The times spent in the ON periods are independent, identically distributed random variables, denoted τ^(i), whose distribution is of Pareto type with finite mean a and infinite variance, i.e.,

Pr{τ > t} ~ t^{−α}, as t → ∞, with 1 < α < 2.  (30)

• The OFF periods are identically distributed random variables τ̃^(i) with a generic distribution of finite mean ā.

Let Y_t be the total cell rate generated by M independent ON-OFF sources:

Y_t = Σ_{i=1}^{M} y_t^(i),   with E{Y_t} = MRa/(a + ā).

To avoid an infinite traffic intensity as M → ∞, ā is increased in such a way that λ = M/(a + ā) remains constant. Hence, in the limit as M → ∞, E{Y_t} = λRa.

It can be shown [38] that the aggregate process is long-range dependent and asymptotically self-similar with Hurst parameter H = (3 − α)/2 > 0.5, with auto-correlation ρ_k ~ k^{1−α} for large k and, in the limit as m → ∞, ρ_k^(m) ~ (1/2)[(k + 1)^{3−α} − 2k^{3−α} + (k − 1)^{3−α}].
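The aggregate-rate process just described can be simulated directly, as in the sketch below. The exponential OFF periods are one convenient finite-mean choice (the model only requires a generic finite-mean distribution), the bin-marking of ON intervals is an approximation, and all names and parameter values are illustrative.

```python
import numpy as np

def pareto_on_off_aggregate(M, R, a_on, a_off, alpha, n, dt=1.0, rng=None):
    """Aggregate rate of M ON-OFF sources with heavy-tailed (Pareto) ON periods.

    ON durations are Pareto with tail index alpha (1 < alpha < 2, infinite
    variance) and mean a_on; OFF durations are exponential with mean a_off
    (one convenient finite-mean choice).  Returns Y_t sampled on n bins of
    width dt, with each ON interval marked at rate R.
    """
    rng = rng or np.random.default_rng()
    x_m = a_on * (alpha - 1) / alpha          # Pareto scale giving mean a_on
    y = np.zeros(n)
    horizon = n * dt
    for _ in range(M):
        t, on = rng.uniform(0, a_on + a_off), False      # random phase offset
        while t < horizon:
            dur = (x_m * (1.0 - rng.random()) ** (-1.0 / alpha) if on
                   else rng.exponential(a_off))
            if on:
                i0, i1 = int(t / dt), min(int((t + dur) / dt) + 1, n)
                y[i0:i1] += R                 # source contributes rate R while ON
            t += dur
            on = not on
    return y

# Example: 100 sources, alpha = 1.4 (H = (3 - alpha)/2 = 0.8)
Y = pareto_on_off_aggregate(M=100, R=1.0, a_on=1.0, a_off=2.0, alpha=1.4, n=10_000)
```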

Analytical results for the distribution of the number of sources in the queue are obtained using an M/G/1 queuing model. For large buffer sizes, the probability of cell loss is given (without proof) by an expression of the form

P_L ≈ c L^{1−α},  (31)

where L is the buffer size and the constant c depends on α, the peak rate R, the mean ON period a, and the load. Notice that the loss probability decreases with L algebraically, and not exponentially as it does for traditional Markovian traffic models.

In [35], long-range dependent traffic (fractional Gaussian noise) with H = (3 − α)/2 is obtained as the aggregate of a large number of ON-OFF sources in which the ON and OFF periods have a Pareto-type distribution with infinite variance. The analysis of two sets of high time-resolution traffic measurements from two Ethernet LANs shows that the data at the source level have high variability (the Noah effect).

    CONCLUSIONS

Traffic models are used in traffic engineering to predict network performance and to evaluate congestion control schemes. Traffic models vary in their ability to model various correlation structures and marginal distributions. Models that do not capture the statistical characteristics of the actual traffic lead to poor predictions, because they either overestimate or underestimate the network performance. Traffic models must have a manageable number of parameters, and the estimation of these parameters needs to be simple. Traffic models that are not analytically tractable can only be used to generate traffic traces; these traffic traces can be used in simulations.

Traffic models can be stationary or nonstationary. Stationary traffic models can be classified into two classes: short-range and long-range dependent. Short-range dependent models include traditional traffic models such as Markov processes and regression models; these traffic models are discussed in the second and third sections. The second section described Markov chains, semi-Markov chains, and Markov modulated processes. The third section described the AR, ARMA, ARIMA, TES, and DAR regression models. Traditional traffic models are characterized by an auto-correlation structure that decays exponentially, so that Σ_k ρ_k < ∞. This causes the variance of the sample mean, v_m, to decay as m^{−1} for large m, or equivalently, the aggregated processes tend to white noise for large m.

Long-range dependent traffic models, such as fractional ARIMA and fractional Brownian motion, are characterized by an auto-correlation structure that decays at a slower rate, such that Σ_k ρ_k = ∞. This causes the variance of the sample mean, v_m, to decay at a slower rate than m^{−1}, even for large m.

Markovian and regression traffic models are short-range dependent traffic models. The queuing performance of the Markovian models is analytically tractable, while, in general, that of regression models is not. The computational complexity of the analytical solution of Markovian models increases as the number of states in the model increases. Since a small number of states is used to model voice sources, Markovian models are widely used in telephony. On the other hand, it is non-trivial to capture an arbitrary probability distribution with Markovian models. Moreover, they cannot be used to model traffic that exhibits long-range dependence.

Regression models are simple to generate; therefore, they are used in simulations. Queuing analysis of regression models is usually intractable, so regression models are often approximated by Markovian models in order to obtain approximate analytical solutions. Regression models define the next random variable in the sequence by an explicit function of previous random variables. Hence, they are used to model sequences that do not vary much between successive observations, e.g., the number of bits per frame of teleconferencing VBR video.



The AR, ARMA, DAR, and TES regression processes can only model stationary processes, while the ARIMA regression process can model both stationary and some non-stationary processes.

In general, AR, ARMA, and ARIMA processes use Gaussian white noise because the sum of Gaussian random variables is Gaussian. Therefore, in order to model an arbitrary distribution, a two-step transformation is needed to transform the resulting process from the Gaussian to the desired distribution. This transformation does not guarantee that the transformed process will have the same correlation structure as the original one.

DAR(p) and TES models have arbitrary distributions. DAR models have auto-correlation structures similar to that of AR(p) processes. They also enjoy analytical tractability through their equivalent Markov chain models. TES models are non-linear regression models that use modulo-1 arithmetic. They can generate processes with different correlation structures and uniform probability distributions. TES models use a heuristic search to find the best TES process that can capture both the auto-correlation structure and the distribution of the empirical data. Recently, an automated solution was presented [39].

Examples of long-range dependent traffic models are given in the fourth section. These include fractional Brownian motion, the aggregation of high-variability ON-OFF sources, and F-ARIMA. Fractional Brownian motion has only one parameter controlling the auto-correlation function; therefore, there is no flexibility in modeling short-range dependence. The aggregation of large numbers of ON-OFF sources with infinite variance for the ON and OFF periods exhibits long-range dependence and can be used to capture the asymptotic behavior of long-range dependent traffic; however, it is not clear how it would model the short-range behavior. F-ARIMA models have three parameters, p, d, and q, that control the auto-correlation structure; therefore, they can capture both short-range and long-range dependence. The parameter d determines the long-term behavior (H = d + 1/2), whereas p and q allow for more flexibility in modeling the short-range properties [25]. Fractional Brownian motion (fractional Gaussian noise) and the Gaussian F-ARIMA have a Gaussian distribution and cannot match arbitrary empirical distributions. Hence, inversion methods need to be used; these methods, in general, affect the correlation structure of the original process.

It appears that a model that can capture short-range dependence, long-range dependence, and an arbitrary distribution is needed. A systematic and simple method that can decouple the estimation of the long-range and short-range parameters of the model needs to be developed.

Finally, analytical performance solutions for nontraditional traffic models need to be investigated for a single node, as well as for an end-to-end network model.

ACKNOWLEDGMENT

Special thanks to Amarnath Mukherjee for his ideas, supervision, and fruitful discussions throughout the development of this work.

REFERENCES

[1] V. Frost and B. Melamed, "Traffic Modeling for Telecommunications Networks," IEEE Commun. Mag., Mar. 1994.
[2] D. Lucantoni, M. Neuts, and A. Reibman, "Methods for Performance Evaluation of VBR Video Traffic Models," IEEE/ACM Trans. Networking, vol. 2, Apr. 1994, pp. 176-80.
[3] L. Kleinrock, Queueing Systems, vol. 1, John Wiley and Sons, 1975.
[4] H. Heffes and D. Lucantoni, "A Markov Modulated Characterization of Packetized Voice and Data Traffic and Related Statistical Multiplexer Performance," IEEE JSAC, Sept. 1986, pp. 856-68.
[5] I. Nikolaidis and I. Akyildiz, "Source Characterization and Statistical Multiplexing in ATM Networks," Tech. Rep. GIT-CC-92-24, Georgia Tech, 1992.
[6] J. Hui, Switching and Traffic Theory for Integrated Broadband Networks, Kluwer Academic, 1990.
[7] B. Maglaris et al., "Performance Models of Statistical Multiplexing in Packet Video Communications," IEEE Trans. Commun., vol. 36, July 1988.
[8] P. Sen et al., "Models for Packet Switching of Variable-Bit-Rate Video Sources," IEEE JSAC, vol. 7, no. 5, 1989.
[9] A. Papoulis, Probability, Random Variables, and Stochastic Processes, 3rd ed., McGraw-Hill, 1991.
[10] G. Box, G. Jenkins, and G. Reinsel, Time Series Analysis, 3rd ed., Prentice Hall, 1994.
[11] C. Shim et al., "Modeling and Call Admission Control Algorithm of Variable Bit Rate Video in ATM Networks," IEEE JSAC, vol. 12, Feb. 1994.
[12] D. Cohen and D. Heyman, "Performance Modeling of Video Teleconferencing in ATM Networks," IEEE Trans. Circuits and Sys. for Video Tech., vol. 3, Dec. 1993, pp. 408-20.
[13] D. Heyman et al., "Modeling Teleconference Traffic from VBR Video Coders," Proc. ICC, IEEE, 1994, pp. 1744-48.
[14] D. Heyman, A. Tabatabai, and T. Lakshman, "Statistical Analysis and Simulation Study of Video Teleconference Traffic in ATM Networks," IEEE Trans. Circuits and Sys. for Video Tech., vol. 2, Mar. 1992, pp. 49-59.
[15] M. Hayes, Statistical Digital Signal Processing and Modeling, John Wiley, 1996.
[16] R. Grunenfelder et al., "Characterization of Video Codecs as Autoregressive Moving Average Processes and Related Queuing System Performance," IEEE JSAC, vol. 9, Apr. 1991, pp. 284-93.
[17] D. R. Cox, "Long-Range Dependence: A Review," Statistics: An Appraisal, Iowa State Univ. Press, 1984, pp. 55-74.
[18] D. Jagerman and B. Melamed, "The Transition and Autocorrelation Structure of TES Processes," Commun. Stat.-Stochastic Models, 1992.
[19] B. Melamed, "TES: A Class of Methods for Generating Autocorrelated Uniform Variates," ORSA J. Comp., vol. 3, no. 4, 1991, pp. 317-29.
[20] W. Leland et al., "On the Self-Similar Nature of Ethernet Traffic (Extended Version)," IEEE/ACM Trans. Networking, Feb. 1994, pp. 1-15.
[21] S. Klivansky, A. Mukherjee, and C. Song, "On Long-Range Dependence in NSFNET Traffic," Tech. Rep. GIT-CC-94-61, Georgia Tech, 1994.
[22] B. B. Mandelbrot and J. R. Wallis, "Some Long-Run Properties of Geophysical Records," Water Resources Res., vol. 5, 1969, pp. 321-40.
[23] J. Hosking, "Modeling Persistence in Hydrological Time Series Using Fractional Differencing," Water Resources Res., vol. 20, Dec. 1984, pp. 1898-1908.
[24] B. Mandelbrot and J. Wallis, "Computer Experiments with Fractional Gaussian Noises," Water Resources Res., vol. 5, Feb. 1969, pp. 228-67.
[25] J. Beran, Statistics for Long-Memory Processes, Chapman and Hall, 1994.
[26] J. Beran et al., "Long-Range Dependence in Variable-Bit-Rate Video Traffic," IEEE Trans. Commun., 1995, pp. 1566-79.
[27] W. Willinger, "When Traffic Measurements Defy Traditional Traffic Models (and Vice Versa): Traffic Modeling for High-Speed Networks," presentation at Georgia Tech, 1994.
[28] G. Samorodnitsky and M. Taqqu, Stable Non-Gaussian Random Processes, Chapman and Hall, 1994.
[29] J. Hosking, "Fractional Differencing," Biometrika, 1981, pp. 165-76.
[30] C. Huang, M. Devetsikiotis, I. Lambadaris, and A. Kaye, "Self-Similar Modeling of Variable Bit Rate Compressed Video: A Unified Approach," Proc. ACM SIGCOMM '95, 1995.
[31] A. Adas and A. Mukherjee, "On Resource Management and QoS Guarantees for Long Range Dependent Traffic," Proc. IEEE INFOCOM '95, 1995.
[32] S. Karlin and H. Taylor, A First Course in Stochastic Processes, 2nd ed., Academic Press, 1975.
[33] I. Norros, "On the Use of Fractional Brownian Motion in the Theory of Connectionless Traffic," IEEE JSAC, Aug. 1995, pp. 953-62.
[34] S. Kogon and D. Manolakis, "Efficient Generation of Long-Memory Signals Using Lattice Structure," preprint, 1995.
[35] W. Willinger et al., "Self-Similarity through High Variability: Statistical Analysis of Ethernet LAN Traffic at the Source Level," Proc. ACM SIGCOMM '95, 1995, pp. 100-13.
[36] B. Mandelbrot, "Long-Run Linearity, Locally Gaussian Processes, H-Spectra and Infinite Variance," Intl. Economic Rev., vol. 10, 1969, pp. 82-113.
[37] M. Taqqu and J. Levy, "Using Renewal Processes to Generate Long-Range Dependence and High Variability," Dependence in Probability and Statistics, Boston, MA, 1986, pp. 73-89.
[38] N. Likhanov, B. Tsybakov, and N. Georganas, "Analysis of an ATM Buffer with Self-Similar ("Fractal") Input Traffic," Proc. IEEE INFOCOM '95, Apr. 1995.
[39] B. Melamed and P. Jelenkovic, "Automated TES Modeling of Compressed Video," Proc. IEEE INFOCOM '95, 1995, pp. 746-52.

BIOGRAPHY

ABDELNASER ADAS received a B.Sc. in electrical engineering from the University of Jordan in 1988, an M.Sc. in electrical engineering from the New Jersey Institute of Technology in 1993, and a Ph.D. from the Georgia Institute of Technology Electrical and Computer Engineering Department. His research interests are in network resource management, statistical performance analysis, traffic modeling, multimedia applications, and network services to the home.