
Control Strategies for Self-Adaptive Software Systems

Antonio Filieri, Martina Maggio, Konstantinos Angelopoulos, Nicolás D'Ippolito, Ilias Gerostathopoulos, Andreas Berndt Hempel, Henry Hoffmann, Pooyan Jamshidi, Evangelia Kalyvianaki, Cristian Klein, Filip Krikava, Sasa Misailovic, Alessandro V. Papadopoulos, Suprio Ray, Amir M. Sharifloo, Stepan Shevtsov, Mateusz Ujma, Thomas Vogel

The pervasiveness and growing complexity of software systems is challenging software engineering to design systems that can adapt their behavior to withstand unpredictable, uncertain, and continuously changing execution environments. Control theoretical adaptation mechanisms have received growing interest from the software engineering community in recent years for their mathematical grounding, which allows formal guarantees on the behavior of the controlled systems. However, most of these mechanisms are tailored to specific applications and can hardly be generalized into broadly applicable software design and development processes.

This paper discusses a reference control design process, from goal identification to the verification and validation of the controlled system. A taxonomy of the main control strategies is introduced, analyzing their applicability to software adaptation for both functional and non-functional goals. A brief discussion of how to deal with uncertainty complements the taxonomy. Finally, the paper highlights a set of open challenges, both for the software engineering and the control theory research communities.

CCS Concepts: • Software and its engineering → Software design engineering; • Computing methodologies → Modeling methodologies; Uncertainty quantification;

1. INTRODUCTION

The pervasiveness of software in every context of life is posing new challenges to Software Engineering. Highly dynamic environments, rapidly changing requirements, and unpredictable and uncertain operating conditions are pushing new paradigms for software design, leveraging runtime adaptation mechanisms to overcome the lack of knowledge at design time and to design more robust software.

The Software Engineering community has been working in the last decade on a multitude of approaches enhancing software systems with self-adaptation capabilities. However, most of them focus on very specific problems and can hardly be generalized to broadly applicable methodologies [Filieri et al. 2014]. Furthermore, many proposed adaptation mechanisms lack the theoretical grounding needed to assure their dependability beyond the few specific cases they have been applied to [Hellerstein et al. 2010].

Control Theory has established in the last century a broad family of mathematically grounded techniques for designing controllers that make industrial plants behave as expected. These controllers can provide formal guarantees about their effectiveness, under precise assumptions on the operating conditions.

Although the similarities between the control of an industrial plant and the adaptation of a software system are self-evident, most attempts to apply off-the-shelf control theoretical results to software systems have achieved very limited results, mostly tailored to specific applications and lacking a rigorous assessment of the adequacy of the applied control strategies. The two main obstacles to achieving a “control theory for software systems” are (i) the difficulty of abstracting a software behavior in a mathematical form suitable for controller design, and (ii) the lack of methodologies in Software Engineering pursuing controllability as a first-class concern [Dorf and Bishop 2008; Diao et al. 2005; Zhu et al. 2009].

Analytical abstractions of software established for quality assurance often help to fill the gap between software models and dynamic models (i.e., differential or difference equations) [Wang et al. 1996; D'Ippolito et al. 2010]. However, software engineers often lack the mathematical background needed for a deeper understanding of these models. Goal formalization and knob identification are additional modeling concerns, where specific characteristics have to be taken into account to achieve a suitable controllability of the system [Papadopoulos et al. 2014].

ACM Transactions on Autonomous and Adaptive Systems, Vol. V, No. N, Article A, Publication date: January YYYY.

Several approaches have tried to include control theoretical results into more general methodologies for designing adaptive software systems. The pioneering works in system engineering [Abdelzaher et al. 2008; Diao et al. 2006; Hellerstein et al. 2004] had the merit of spotlighting how control theoretical results can improve the design of computing systems. These contributions had a significant impact especially on performance management and resource allocation. However, the new trends in self-adaptive software introduced new software models and a variety of quantitative and functional requirements beyond the scope of those works. More recently, methodological approaches for performance control [Parekh 2010] and the design of self-adaptive operating systems [Leva et al. 2013] have been proposed. Software Engineering and Autonomic Computing have also highlighted the centrality of feedback loops for adaptive systems [Brun et al. 2009; Kephart and Chess 2003].

This paper extends our previously published contribution [Filieri et al. 2015b], where we provided a comprehensive analysis of the control theoretical design of self-adaptive software systems, matching the various phases of software development with the relevant background and techniques from Control Theory (Sec. 2). The goal is to bootstrap the design of mathematically grounded controllers, sensors, and actuators for self-adaptive software systems, to achieve formally provable effectiveness, efficiency, and robustness. More specifically, the aim of this paper is to provide an overview of the control techniques that are well known in the control community and can be used or have been used for the design of self-adaptive software. In preparing this overview, we took the control perspective and summarized the result of many years of research in the control field.

To reach this aim, our previous work is here complemented by a detailed taxonomy of the most important control strategies (Sec. 3), discussing their applicability to specific software problems, and by a discussion of the related issue of modeling and formalizing uncertainty. In Section 4 we sketch some of the current contributions of software engineering to controller design, implementation, and validation that could be valuable in bridging the gap between the two disciplines. Also, we identify a set of challenges for control of self-adaptive software (Sec. 5).

2. CONTROL DESIGN PROCESS

This section discusses the design of a control loop for an existing software system. The section covers and summarizes the control-based design process proposed in [Filieri et al. 2015b], which the reader can refer to for additional details. The overall process is depicted in Figure 1, while Figure 2 shows the resulting control scheme. In our former contribution, we discussed some of the aspects of each step more extensively, also providing an example of the design process [Filieri et al. 2015b]. Here, on the contrary, we summarize the process and focus on a taxonomy of control strategies and on techniques to handle the inherent uncertainty present in software systems.

The process starts with the definition of the goals of the system, which will be used by the controller as feedback signals and to measure and quantify when the adaptive system is fulfilling its objectives, denoted by y in Figure 2. The process continues with the identification of the quantities that can be changed at runtime to realize the adaptation features, denoted by u in Figure 2. These are usually called “knobs” or “actuators”.

Based on the identified goals and knobs, the next step is to define a model for the system, denoted by P in Figure 2. Given this model, one can then synthesize a controller C with the most appropriate method among the many different ones that control theory offers.

[Fig. 1 shows six steps: (1) Identify the goals, (2) Identify the knobs, (3) Devise the model, (4) Design the controller, (5) Implement and integrate the controller, (6) Test and validate the system, grouped into Levels 1–3.]

Fig. 1. Steps in the design and development of a control-based mechanism for self-adaptive systems. The different levels group steps into activities with tighter coupling. This figure originally appeared in [Filieri et al. 2015b].

The next step is to analyze the closed loop system and to prove that its behavior is the desired one. The closed loop system should reach its goal(s) and be stable, and sufficiently accurate and robust to variations and changes. These properties should also be verified in the presence of disturbances. If the desired properties are not exhibited, the process backtracks to one of the previous steps and a new iteration is started. Once the desired properties of the closed loop system are met, the controller should be implemented and tested, both in isolation and integrated with the system under control.

2.1. Identify the goals

The first question that should be answered is what goals one wants to achieve by building a control strategy on top of a software system. To build an effective control strategy, the goals have to be quantifiable and measurable.

The simplest type of goal for an equation-based controller is a reference value to track. In this case, the control objective is to keep a measurable quantity (response time, energy consumption, occupied disk space) as close as possible to the given reference value, called setpoint.1 A second category of goals is a variation of the classic setpoint-based goal, where the goal is a specific range of interest, e.g., the average frame rate should be between 23 and 32 frames per second. Usually it is easy to transform these goals into equivalent setpoint goals with confidence intervals.
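Such a range goal can be recast as a setpoint goal by tracking the midpoint of the range and treating half its width as the admissible tolerance. A minimal sketch, using the frame-rate bounds from the example above (the helper `satisfied` is hypothetical, not part of any framework):

```python
# Minimal sketch: turning a range goal (23-32 fps) into an equivalent
# setpoint goal with a tolerance. Names and numbers are illustrative only.
LOW, HIGH = 23.0, 32.0

setpoint = (LOW + HIGH) / 2      # track the middle of the range: 27.5 fps
tolerance = (HIGH - LOW) / 2     # deviations up to 4.5 fps stay in range

def satisfied(measured_fps):
    """True when the measured frame rate lies within the original range."""
    return abs(measured_fps - setpoint) <= tolerance

print(satisfied(25.0), satisfied(20.0))   # True False
```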

A third broad category of goals concerns the minimization (or maximization) of a measurable quantity of the system. For example, we may want to deliver our service with the lowest possible energy consumption. Depending on the specific quantity under control, certain optimization problems can be reduced again to setpoint tracking. In particular, if the range of the measured property to optimize is convex and bounded, a setpoint tracker required to keep the measured property at its minimum possible value will drive the process as close as possible to the target, that is, toward the minimum possible value. However, optimization problems usually require more complex control strategies, especially when the optimization of one or more properties is subject to constraints on the values of others, e.g., minimize the response time while keeping the availability above a certain threshold.2

1 Setpoint tracking is a very well studied problem in control theory [Levine 2010].

[Fig. 2 shows the closed loop: the setpoint is compared with the measured output y(k) to produce the error e(k); the controller C computes the control signal u(k) that drives the plant P, whose output is also affected by the disturbance d(k).]

Fig. 2. Adaptive system: the control perspective. This figure originally appeared in [Filieri et al. 2015b].

Special attention has to be paid when there are conflicting goals. Two simple types of conflict resolution strategies are prioritization and cost function definition. In the former, multiple goals are ranked according to their importance so that, whenever it is not feasible to satisfy all of them at once, the controller will give precedence to the satisfaction of higher priority goals first.3 Cost functions are another common means to specify the resolution of conflicting goals. In this case the cost function specifies all the suitable tradeoffs between conflicting goals as the Pareto front of an optimization problem. Optimal controllers can handle this type of specification, guaranteeing an optimal resolution of conflicts.
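As a concrete illustration of the cost-function approach, the following sketch scalarizes two conflicting goals (response time and energy) with weights and picks the knob setting of minimum cost. The configurations, numbers, and weights are all invented for illustration:

```python
# Sketch: resolving conflicting goals with a cost function.
# Each knob setting trades response time against energy consumption;
# the weights express the (hypothetical) relative importance of the goals.
configs = {
    # knob setting: (response_time_ms, energy_watts)
    "low_power": (180.0, 10.0),
    "balanced":  (90.0, 18.0),
    "high_perf": (40.0, 35.0),
}

W_TIME, W_ENERGY = 1.0, 3.0   # weights defining the acceptable tradeoff

def cost(response_time_ms, energy_watts):
    return W_TIME * response_time_ms + W_ENERGY * energy_watts

best = min(configs, key=lambda name: cost(*configs[name]))
print(best)   # "balanced": cost 144 vs 210 (low_power) and 145 (high_perf)
```

Changing the weights moves the chosen operating point along the tradeoff curve, which is exactly the mechanism an optimal controller exploits.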

2.2. Identify the knobs

The second step is to find the knobs that one can tweak in order to act towards the satisfaction of the specified goals. In some cases, software systems already expose clear knobs that one can use to change the behavior of the system. In [Souza et al. 2011], an elicitation methodology is proposed, and the impact of each knob is evaluated with reference to the design goals.

Another important point when identifying the knobs that can be used for control is the identification of each knob's timescale. Some of the knobs, like powering up a new virtual machine, require time to be effective. The control strategy should be aware of that and either base its action on a prediction of the system's behavior or wait until the effect of the chosen action is measurable and quantifiable in the system.

2.3. Devise the model

The next step in the control design process is the choice of a proper model for the software system under control. In general, to devise a control strategy, one needs a model of the relationship between the knobs identified in Section 2.2 and the goals identified in Section 2.1. Such a model represents the effect on the goals of a change in the knob values.

In control theory, the devised model is analytic, therefore the interaction is formally described by mathematical relationships. This could mean that logical formulas are used (especially for discrete event control strategies) or that dynamical models are written (generally for continuous- or discrete-time-based controllers).

Dynamical models are used to synthesize continuous- or discrete-time-based strategies. A dynamical model can be either in state space form or expressed as an input-output relationship.

In the first case, one must choose the state variables that keep track of the past conditions in which the system was found and represent all the necessary knowledge about the system for understanding the evolution of its dynamics in time, for example the length of a queue, the percentage of frames already encoded, or the current encoding speed. The knowledge of the state variables' current values and of the input values of the system allows one to formally determine the values of the output variables [Papadopoulos et al. 2014]. The choice of the state variables may not be unique, since an infinite number of equivalent representations can be found [Astrom and Murray 2008]. Once the state variables are identified, the relationship between the input and the state variables should be written, together with the relationship between the state and the output variables. For example, if the input variable of a queue is the number of incoming requests and the output variable is the average service time, the state variable is the number of already enqueued requests. By knowing how many requests arrived since the last measurement and the service time of each request, one can analytically determine the average service time of the requests that were served in the last time interval. The resulting equation is the system's model.

2 In this case, optimal control, discussed in Section 3.2.2, is more appropriate.
3 Several control strategies support the achievement of prioritized goals, for example daisy chain control [Levine 2010].

In principle, one could have many equations describing the effect of different inputs on different state variables and the effect of the state variables on the measurable outputs. These form a system of equations and constitute the dynamical model [Astrom and Murray 2008] that is used for the controller design. Depending on the structure of this system, there are different types of models: linear and nonlinear [Astrom 2008], switching, or parameter varying. Equations can also contain parameters that vary over time. If the equations are linear, the corresponding systems can be analyzed as Linear Parameter Varying (LPV) systems [Sun et al. 2008] or Switching Systems [Liberzon 2003]. In a switching system, there is a certain number of possible parameter values and, while the system evolves in time, the parameters can switch between the prescribed alternatives. In LPV systems, usually, the alternatives are infinite and the parameters can take any possible value.

In the second case, the system is expressed as a direct input-output relationship. Contrary to the state variable-based representation, this representation of the system is unique. State-of-the-art techniques in system identification often permit identifying the input-output relationship of a system directly from data, without using state variables. In the linear case, the input-output relationship can be expressed as a transfer function [Astrom and Murray 2008]. Transfer functions are very useful for control design, because they encode the input-output relationship as an algebraic relationship, which in turn can be used to easily assess the satisfaction of desired properties of the system. The same properties can be assessed using state-space models, but this requires solving linear systems, which is harder than finding the roots of a polynomial (the standard way to assess the stability of a system specified with a transfer function).
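As a sketch of identifying an input-output model directly from data, the code below fits a first-order linear relationship y(k+1) = a·y(k) + b·u(k) by least squares. The "true" parameters are invented, and real identification would additionally deal with noise and model-order selection:

```python
# Sketch: black-box identification of a first-order input-output model
# y(k+1) = a*y(k) + b*u(k) from input/output data via least squares.
import numpy as np

rng = np.random.default_rng(0)
a_true, b_true = 0.8, 0.5            # hypothetical "unknown" system
u = rng.uniform(-1.0, 1.0, 200)      # exciting input signal
y = np.zeros(201)
for k in range(200):
    y[k + 1] = a_true * y[k] + b_true * u[k]

# Regress y(k+1) on [y(k), u(k)] to estimate the parameters.
X = np.column_stack([y[:-1], u])
(a_hat, b_hat), *_ = np.linalg.lstsq(X, y[1:], rcond=None)
print(a_hat, b_hat)                  # recovers 0.8 and 0.5 (noise-free data)
```

The recovered coefficients directly give the transfer function of the system, which can then feed the controller synthesis step.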

In writing the model, the possibility of disturbances acting on the system should also be considered. Disturbances can be modeled together with the system, or their existence can simply be acknowledged. In the first case, a better control strategy can be designed, for example coupling a feedback controller with a feedforward control strategy that measures the disturbance and acts to cancel it. In the second case, the feedback control strategy should be able to reject some of the disturbances.

From a software engineering perspective, a system at this phase is typically represented by its structure in an architectural model (e.g., UML2, AADL). Following a grey-box approach, only the details relevant for the specific adaptation concern are modeled. As an example, performance-based adaptation typically relies on performance prediction approaches for component-based architectures, which in turn rely on performance-annotated software architecture models [Balsamo et al. 2004; Brosig et al. 2015]. Architectural models are either directly employed or transformed into stochastic models (e.g., Queuing Networks [Tribastone 2013], Petri Nets [Ding et al. 2014], Markov models [Calinescu et al. 2012]) and used in the specification of the adaptation logic of the system.

As a representative example of a modeling formalism used in software engineering, Markov models are a class of state-based models that can be used to describe systems that exhibit probabilistic behavior [Whittaker and Poore 1993]. There exists a number of Markov models; we categorize them by the amount of control available. Although the models described below belong to the discrete-time case, for each model there exists also a continuous-time counterpart.

For fully probabilistic systems, we can use discrete-time Markov chains (DTMCs). DTMCs describe the probability of moving between system states, and the model itself does not include any controllable actions. There exist several methods employing DTMCs in the context of controlling software [Calinescu et al. 2012; Epifani et al. 2009]. A typical scenario includes verifying a DTMC model of a system against a property; if a violation of the property is identified, a reconfiguration of the system is triggered. When some of the actions in the system are controllable, we can use Markov decision processes (MDPs). MDPs extend Markov chains by modeling controllable actions using non-determinism. The resolution of the non-determinism is then used as a controller of the system. Controller synthesis for MDPs is a well researched subject with a number of synthesis methods for various types of properties [Puterman 1994; Baier et al. 2004; Brazdil et al. 2008; Kwiatkowska and Parker 2013]. Systems with multiple players with conflicting objectives, where players exhibit probabilistic behavior, can be formalized using stochastic games. Stochastic games can be seen as an extension of MDPs where the control over states is divided between several players. Typically, we are interested in the resolution of the non-determinism that allows a certain player to achieve its objective despite possibly hostile actions of other players.
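The DTMC scenario described above can be sketched as follows: a small hypothetical three-state model of a service is checked against the property "the long-run probability of being degraded stays below 10%", and a violation triggers reconfiguration. The states, probabilities, and threshold are all invented for illustration:

```python
# Sketch: verifying a DTMC model of a service and triggering reconfiguration.
import numpy as np

# States: 0 = healthy, 1 = degraded, 2 = recovering (hypothetical model)
P = np.array([[0.95, 0.05, 0.00],
              [0.00, 0.70, 0.30],
              [0.80, 0.00, 0.20]])

# Steady-state distribution pi solves pi P = pi with the entries summing to 1.
A = np.vstack([P.T - np.eye(3), np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

THRESHOLD = 0.10                 # property: degraded less than 10% of the time
if pi[1] > THRESHOLD:
    print("property violated: trigger reconfiguration")
else:
    print("property holds")
```

A production setting would use a probabilistic model checker and temporal-logic properties (as in the works cited above) rather than this bare steady-state computation.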

2.4. Design the controller

There are different techniques that can be used to design a controller (in control engineering terms: to do control synthesis). These techniques differ in the amount of information required to set up the control strategy, in the design process itself, and in the guarantees that they can offer [Astrom and Murray 2008].

The technique that requires the least amount of information is called synthetic design. Synthetic design consists in taking pre-designed control blocks and combining them together. It often relies on the experience of the control specialist, who looks at experiments performed on the system to be controlled and decides which blocks are necessary. For example, when the output signal is very noisy, a block to be added could be a filter that reduces the noise and recovers the original signal. Synthetic design usually starts with a basic control block and adds more to the system as more experiments are performed. Although the information required to set up the control strategy is very low, the expertise necessary to effectively design and tune such systems is high, and both controller design experience and domain-specific knowledge are required. Despite not requiring much information, the formal guarantees that this technique offers are limited [Boyd et al. 1990]. This is due to the empirical nature of the controller design, where trial and error is applied and elements are added and removed. The main obstacle to formal guarantees is the interaction between the added elements, which is hard to predict a priori.
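As an example of a pre-designed block of the kind synthetic design combines, the sketch below implements a first-order low-pass filter (an exponentially weighted moving average) that smooths a noisy measurement before it reaches the controller. The smoothing factor is a hypothetical tuning choice:

```python
# Sketch: a low-pass filter block used to smooth a noisy output signal.
def make_filter(alpha=0.2):
    """Return a filter closure; alpha in (0, 1] controls smoothing
    (smaller alpha = heavier smoothing, slower response)."""
    state = {"y": None}
    def filt(sample):
        if state["y"] is None:
            state["y"] = sample                  # initialize on first sample
        else:
            state["y"] = alpha * sample + (1 - alpha) * state["y"]
        return state["y"]
    return filt

f = make_filter()
smoothed = [f(x) for x in [10.0, 12.0, 9.0, 11.0]]   # noisy measurements
print(smoothed)   # [10.0, 10.4, 10.12, 10.296]
```

Note the tradeoff the designer must judge experimentally: heavier smoothing removes more noise but also delays the signal the controller reacts to, which is exactly the kind of block interaction that makes formal guarantees hard.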

The second technique is a variation of the first one. It is based on the selection of a controller structure and is often called parameter optimization. The only difference is that the choice of the controller parameters is often based on optimization strategies or on analytical tuning methods [Astrom and Murray 2008].


The last technique is often referred to as analytical design, and it is based on the solution of an analytical problem. The amount of necessary information greatly increases, since a model of the controlled entity is required. Based on the equation-based model, the controller synthesis selects a suitable equation to link the output variables to the control variables. Depending on which analytical problem is used (the optimization of some quantities, the tracking of a setpoint, the rejection of disturbances), different guarantees are enforced with respect to the controlled system. In some cases, this process can be automated, and analytical control synthesis can sometimes be used with domain-specific knowledge but without prior control expertise [Filieri et al. 2014]. This generally imposes limitations on the models to be used and on the obtained guarantees, and not all control problems can be formulated in such a way that the solution can be derived automatically.

In control theory, controllers can be divided into two main classes: time-based and event-based controllers. The class of time-based controllers has been studied for many years, and can be further divided into continuous-time and discrete-time controllers. Given the nature of software systems, discrete-time controllers, where signals are not continuous but discrete, seem to be a better fit for control of software systems [Diao et al. 2005]. Nonetheless, there has been some effort in using continuous time models to represent and control software entities [Tribastone 2013]. In this class, the controller is an equation-based system that acts at pre-specified time instants. When these time instants are not pre-specified, but depend on external inputs from the system, the controller belongs to the second class and is an event-based controller. To complement these two classes, rule-based systems are often considered as a third, separate type of controller. In control terms, rule-based (or knowledge-based) systems are a different paradigm that shares little with “classical” control. However, they have often been exploited to build self-adaptive software, therefore we decided to include them in our control taxonomy.
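A minimal discrete-time, equation-based controller of the kind described above is the PI (proportional-integral) controller, invoked once per sampling interval. The gains below are hypothetical and would normally come from the synthesis step:

```python
# Sketch: a discrete-time PI controller acting at pre-specified instants.
class PIController:
    def __init__(self, kp, ki):
        self.kp = kp            # proportional gain (hypothetical value)
        self.ki = ki            # integral gain (hypothetical value)
        self.integral = 0.0     # controller state: accumulated error

    def update(self, setpoint, measurement):
        """Compute the control signal u(k); call once per sampling period."""
        error = setpoint - measurement
        self.integral += error
        return self.kp * error + self.ki * self.integral

controller = PIController(kp=0.5, ki=0.1)
u = controller.update(setpoint=1.0, measurement=0.2)
print(u)   # 0.5*0.8 + 0.1*0.8 = 0.48
```

The accumulated error is what distinguishes this equation-based, time-triggered controller from a stateless rule: it keeps acting until the error is driven to zero.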

2.5. Prove properties of the closed loop system

The properties provided by control-theoretically designed adaptation strategies have been analyzed both from the software engineering perspective [Villegas et al. 2011; Filieri et al. 2014] and from the control engineering perspective [Diao et al. 2006; Parekh et al. 2002]. Generally speaking, a control-based adaptation strategy provides the properties it is designed for. For example, a control system that acts on the length of a key for an encryption algorithm is able to provide different levels of security. Taking the control perspective here, we focus on the translation of control properties into the corresponding software engineering ones. A control system usually pursues four main objectives:

— Setpoint Tracking. The setpoint is a translation of the goals to be achieved. For example, the system can be considered responsive when its user-perceived latency is below one second. The setpoint is here the value of one second for the maximum user-perceived response time. In general, the self-adaptive system should be able to achieve the specified setpoint, whenever this setpoint is reachable. If the setpoint is changed during the lifetime of the software, the controlled system should react to this change and make sure that the new setpoint is reached. Whenever the setpoint is not reachable, the controller should make sure that the measured value y(k) is as close as possible to the desired value ȳ(k).

— Transient behavior. Control theory does not only guarantee that the setpoint isreached but also how this happens. The behavior of the system during the initial-ization phase or when an abrupt change happens is usually called the “transient of

ACM Transactions on Autonomous and Adaptive Systems, Vol. V, No. N, Article A, Publication date: January YYYY.


the response”. For example, it is possible to enforce that the response of the system does not oscillate around the setpoint, but is always below (or above) it.

— Robustness to inaccurate or delayed measurements. Oftentimes, in a real system, obtaining accurate and punctual measurements is very costly, for example because the system is split into several parts and information has to be aggregated to provide a reliable measurement of the system status. The ability of a controlled system (in control terms, a closed-loop system composed of a plant and its controller) to cope with non-accurate measurements or with data that is delayed in time is called robustness. The controller should behave correctly even when transient errors or delayed data are provided to it.

— Disturbance rejection. In control terms, a disturbance is everything that affects the closed-loop system other than the action of the controller. For example, when a virtual machine provider places a machine belonging to a different software application onto the same physical machine as the target software, the performance of the controlled software may change due to interference [Govindan et al. 2011]. Disturbances should be properly rejected by the control system, in the sense that the control variable should be correctly chosen to avoid any effect of this external interference on the goal. In the video encoding example, the controller should be able to distinguish between a drastic change of scene (like the switch between athletes and commentators in a sport event) and a temporary slowdown due to the switch between speakers in a conference recording. In the virtual machine example, the controller should be able to distinguish between a transient migration that is slowing down the software for a limited period of time and a persistent co-location that requires action to be taken to guarantee the goal satisfaction.
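To make disturbance rejection concrete, the sketch below simulates a toy static plant under integral feedback; a step disturbance appears mid-run and the accumulated error drives the output back to the setpoint. The plant, gains, and disturbance values are illustrative, not taken from the paper.

```python
# Toy simulation of disturbance rejection: a static plant y = b*u + d
# under integral feedback. All numbers are illustrative.

def simulate(setpoint=1.0, steps=200, ki=0.3, b=1.0):
    """Integral controller u += ki * e against a step disturbance d."""
    u, d = 0.0, 0.0
    history = []
    for k in range(steps):
        if k == 100:
            d = 0.5            # step disturbance appears mid-run
        y = b * u + d          # measured output
        e = setpoint - y       # tracking error
        u += ki * e            # integral action accumulates the error
        history.append(y)
    return history

ys = simulate()
# ys[99] sits at the setpoint, ys[100] jumps when the disturbance hits,
# and the tail settles back at the setpoint despite the disturbance.
```

The integral term is what makes the steady-state effect of the constant disturbance vanish; a purely proportional controller would leave a residual offset.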

These high level objectives have counterparts in control terminology and their satisfaction can be mapped into the “by design” satisfaction of the following properties [Diao et al. 2006; Stein 2003; Villegas et al. 2011]:

— Stability. A system is asymptotically stable when it tends to reach an equilibrium point, regardless of the initial conditions. This means that the system output converges to a specific value as time tends to infinity. This equilibrium point should ideally be the specified setpoint value.

— Absence of overshooting. An overshoot occurs when the system exceeds the setpoint before convergence. Controllers can be designed to avoid overshooting whenever necessary. This could also avoid unnecessary costs (for example when the control variable is a certain number of virtual machines to be fired up for a specific software application).

— Guaranteed settling time. Settling time refers to the time required for the system to reach the stable equilibrium. The settling time can be guaranteed to be lower than a specific value when the controller is designed.

— Robustness. A robust control system converges to the setpoint despite the underlying model being imprecise. This is very important whenever disturbances have to be rejected and the system has to make decisions with inaccurate measurements.

These four properties can be analytically guaranteed, based on the mathematical definition of the control system and of the software. A self-adaptive system designed with the aid of control theory should provide formal quantitative guarantees on its convergence, on the time to obtain the goal, and on its robustness in the face of errors and noise.

In the case of discrete time control systems depicted in Figure 2, these properties can be proved analytically. For more details, the reader is referred to [Filieri et al. 2015b], which treats the matter with a software engineering example, and to [Astrom and Murray 2008] for the control treatise.

Whenever the desired properties are not satisfied, one can step back in the design process and design a different controller, as discussed in Section 2.4. In some other cases, the model can be refined or a more comprehensive model can be used, as discussed in Section 2.3. A more complex model can capture parts of the self-adaptive software system that are necessary to provide formal guarantees on the time behavior of the system. Finally, in some cases, one can go back and add a different knob, as discussed in Section 2.2, to have better control over the goals of the software system.

2.6. Implement and integrate the controller

The next step after the controller design is its implementation and integration with the system under control. Although this step appears to be quite straightforward, it has also been called “the hard part” [Hellerstein 2009].

One of the main problems is that the implementation team (software engineers) often works independently from the control team (control experts) [Liu et al. 2004]. The transition from control algorithms, which are typically in the form of formulas or simulation results, into software is a non-trivial process. It involves many ad-hoc decisions, not only in the controller implementation itself (e.g., types of state variables), but also in the implementation of the accompanying code that is responsible for the integration. This becomes particularly challenging in the case of remotely distributed systems.

Regardless of the specific implementation technology and the degree of automation in translating the control algorithm to runnable code, there are some recurring issues in every controller implementation attempt. Namely:

— Actuator saturation occurs when the system under control is unable to follow the output from the controller. This happens when a control knob is fully engaged. In classical control, actuator saturation is a direct result of a physical constraint (e.g., a valve can only open up to a certain point and not more); in software systems control, similar constraints can be observed as well (e.g., the number of servers used by the system under control cannot exceed the number of available servers in the cluster).

— Integrator windup is a direct consequence of actuator saturation and occurs in controllers with integral terms (the I term in a PID controller). During actuator saturation, the integral control accumulates significant error, which causes a delay in error tracking when the system under control recovers to the point that the actuator is no longer saturated. Therefore, a conditional integrator (or integrator clamping) should be used to avoid the windup.

— Integrator preloading is similar to integrator windup and can occur (i) during system initialization, or (ii) when there is a significant change in the setpoint of the controller. In both cases, the integral term should be preloaded with a value that would smooth the setpoint change.
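The three issues above can be sketched in a few lines. The PI controller below is an illustrative example, not a prescribed design (gains, limits, the clamping condition, and the preloading rule are all assumptions): it saturates its output, integrates conditionally to avoid windup, and exposes a preloading hook for large setpoint changes.

```python
# Sketch of a PI controller handling saturation, windup, and preloading.
# Gains, limits, and the clamping policy are illustrative assumptions.

class PIController:
    def __init__(self, kp, ki, u_min, u_max):
        self.kp, self.ki = kp, ki
        self.u_min, self.u_max = u_min, u_max
        self.integral = 0.0

    def update(self, error):
        # Tentative control action before saturation.
        u = self.kp * error + self.ki * (self.integral + error)
        u_sat = min(max(u, self.u_min), self.u_max)   # actuator saturation
        # Conditional integration (anti-windup): accumulate only when the
        # actuator is not saturated, or when the error would drive it out
        # of saturation.
        if u == u_sat or (u > self.u_max and error < 0) \
                      or (u < self.u_min and error > 0):
            self.integral += error
        return u_sat

    def preload(self, target_output):
        # Integrator preloading: initialize the integral term so that a
        # large setpoint change starts from a sensible output level.
        self.integral = target_output / self.ki if self.ki else 0.0
```

While the actuator is pinned at `u_max`, the integral term stays frozen, so the controller reacts immediately once the error changes sign instead of first unwinding a large accumulated error.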

Apart from dealing with the common problems presented above, the controller implementation has to be robust. Common mechanisms to achieve robustness are (i) detecting and disregarding invalid input signals (e.g., out-of-bound values), (ii) accounting for actuator delays by preventing duplicate control actions, and (iii) degrading gracefully in case of invalidation of design-time assumptions (e.g., when the actual computation and communication jitter is larger than assumed by the controller designers).

Controller Integration. After a controller has been implemented, it has to be integrated with the rest of the system. This includes wiring all the responsible components together and ensuring that the sensor data are consistently collected across all the sources and that adaptation actions are correctly coordinated.

When integrating a controller into a system, there are two basic approaches to follow, with respect to the separation of concerns between the controller and the system under control [Salehie and Tahvildari 2009]: (i) intertwine the control logic with the system under control – internal control, and (ii) externalize the control logic into a “controller” component and use sensor and actuator probes (if available) to connect the controller and the system under control – external control.

External control provides a clear separation of concerns between the application and adaptation logic. Its advantages with respect to maintainability, substitutability, and reuse of the adaptation engine and associated processes make it the preferred engineering choice [Salehie and Tahvildari 2009]. It also allows building adaptation on top of legacy systems where the source code might be unavailable. The main drawback of external control, however, lies in assuming that the target system can provide (or can be instrumented to provide) all the necessary endpoints for its observation and consequent modification. This assumption seems reasonable since many systems already provide some interfaces (e.g., tools, services, APIs) for their observation and adjustment [Garlan et al. 2004b], or could be instrumented to provide them, e.g., by using aspect-oriented approaches. There is also a potential performance penalty as a consequence of using external interfaces or running extra components and connectors, which cannot be tolerated in some resource-constrained environments (e.g., in embedded devices where memory footprint and transmission delays matter).

The separation of concerns in the external approach also makes it possible to provide more systematic ways for control integration by leveraging model-driven engineering techniques and domain-specific modeling [Vogel and Giese 2014; Krikava et al. 2014]. Essentially, these solutions raise the level of abstraction on which the feedback control is described and therefore make it amenable to automated analysis as well as complete integration code synthesis.

2.7. Test and validate the system

The next step involves the test and validation of the controller. This can be divided into two broad categories. On one hand, one needs to test the controller itself and check that it does the correct thing. Part of this is already done by proving the properties of the closed-loop system. However, this does not test the controller implementation. The second part is the verification of the controller together with the system under control, to understand if the controller can deliver the promised properties. This should hold in general, but model discrepancies may have been overlooked during the design process; therefore, validation is needed despite the analytical guarantees given by control theory.

Static analysis and verification techniques can be used to assess both the conformance of the controller code to its intended behavior and the absence of numerical errors due to the specificities of different programming languages and execution architectures. For example, modern SMT tools can be used at compile time to detect the occurrence of numerical problems and automatically provide fixes guaranteeing the final results of the procedures to be correct up to a target precision [Darulova and Kuncak 2014].

Also, there is a growing area of verification theories and tools focusing on real analysis and differential equations. The most recent advancements include satisfiability modulo ODEs and hybrid model checking. The former can be used to verify whether a system described by means of a set of differential equations can reach certain desirable (or undesirable) states, within a set finite accuracy [Gao et al. 2013]. In terms of scalability, SMT approaches over ODEs have been proved to scale up to hundreds of differential equations. Hybrid model checking is instead focused on the verification of properties of hybrid systems, which can in general be defined as finite automata equipped with variables that evolve continuously according to dynamical laws over time. These formalisms are for example useful to match the different dynamical behaviors of a system with its current configuration, and can be valuable especially to study and verify switching controllers and the co-existence of discrete-event and equation-based ones. Current hybrid model checkers are usually limited to linear differential equations [Henzinger et al. 1997; Franzle and Herde 2007].

Once the controller has been verified by itself, it is necessary to verify the controller implementation together with the system under control. This can be done by means of extensive experiments. Some types of analysis are based on systematic testing. One common way to validate a controller implementation and its system under control is to show statistical evidence, for example, using cumulative distribution functions, as done in [Klein et al. 2014]. In this and similar work, some experiments are conducted, fewer in number than required by the rigorous analysis previously mentioned. Based on the results of these experiments, one can compute the empirical probability distributions of the goals.

It is also possible to use tools like scenario theory [Campi and Garatti 2011; Papadopoulos et al. 2016] to provide probabilistic guarantees on the behavior of the controlled system together with the control strategy. The performance evaluation can be formulated as a chance-constrained optimization problem and an approximate solution can be obtained by means of scenario theory. If this approach is taken, the performance of the software system can be guaranteed to be within specific bounds with a given probability.

3. CONTROLLERS TAXONOMY

This section discusses a selection of different control strategies and concepts that can be applied for the solution of software adaptation problems. While a comprehensive survey of all the techniques established by control theory is beyond the scope of this paper, a discussion of the principal classes and dimensions of control solutions is provided as a reference for casting common software adaptation problems in the framework and terminology of control. We first start with a discussion on the control loop types, presented in Section 3.1. We then delve into the implementation of different types of controllers. Section 3.2 discusses the Time-Based controllers, Section 3.3 the Event-Based ones, and Section 3.4 the Knowledge-Based ones. Finally, Section 3.5 discusses uncertainty management strategies that can complement the mentioned techniques.

3.1. Loop type

Loop-based controllers are defined by the way in which they incorporate feedback and system measurement. We identify four main types.

— Open-loop control: The control signal is based on a mathematical model, without any measurement. This scheme is used only with perfect knowledge of the system and outside environment – it cannot react to unforeseen disturbances.

— Feedforward control: This approach is similar to open-loop, but includes measurements of disturbances. The controller rejects these disturbances’ effects on the system. Perfect knowledge of the system model is assumed, hence, unmeasurable disturbances cannot be rejected.

— Feedback control: This is the most-used structure for the control loop. The control input is computed as a function of the difference between measured system behavior and desired behavior. This scheme is usually able to handle unforeseen disturbances and uncertainty in the system. In addition to rejecting unmeasured disturbances, feedback control can also significantly change the behavior of the controlled system; e.g., stabilizing an (open-loop) unstable system.

— Feedback and feedforward control: The control is computed as in the feedback case, but disturbance measurement is also incorporated to actively reduce the disturbances’ effects. The feedback and feedforward controllers must be designed together.
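A toy comparison may help contrast the open-loop and feedback types. In the sketch below, an open-loop controller inverts the nominal plant model once and never measures, while an integral feedback controller measures the output and rejects the disturbance; the plant gain and disturbance values are made up for illustration.

```python
# Toy contrast between open-loop and feedback control on the plant
# y = GAIN * u + d, where d is a disturbance. Numbers are illustrative.

SETPOINT = 10.0
GAIN = 2.0                        # nominal plant gain, known to both schemes

def open_loop(d):
    u = SETPOINT / GAIN           # inverts the nominal model, never measures
    return GAIN * u + d           # any disturbance goes uncorrected

def feedback(d, steps=50, ki=0.2):
    u = 0.0
    for _ in range(steps):
        y = GAIN * u + d          # measure the actual output
        u += ki * (SETPOINT - y)  # integral feedback on the error
    return GAIN * u + d

# open_loop(0.0) hits the setpoint; open_loop(3.0) misses it by exactly
# the disturbance; feedback(3.0) converges to the setpoint anyway.
```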

More advanced control schemes are discussed in the literature [Scattolini 2009] and have been used to control computing systems [Patikirikorala et al. 2012].

3.2. Time-Based Control techniques

This section reviews the most important time-based control techniques, providing hints on when a particular technique should be used. The focus is mainly on discrete-time control techniques, since these are the best suited for designing self-adaptive software systems.

3.2.1. Classical feedback control. One of the simplest controllers is the bang-bang controller (or on-off controller) [Artstein 1980]. The control signal can assume only two values; e.g., {−1, 1} or {on, off}. Due to its simplicity, it was widely studied especially in optimal control problems [Hermes and Lasalle 1969].
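As an illustration, a bang-bang controller might toggle a throttling flag based on measured CPU load; the thresholds and the hysteresis band below are hypothetical choices, not part of the classical formulation.

```python
# Illustrative bang-bang (on-off) controller with hysteresis: toggle a
# throttling flag based on measured CPU load. Thresholds are hypothetical.

def bang_bang(loads, low=0.6, high=0.8):
    """Return the on/off control signal for each load sample."""
    on = False
    signal = []
    for y in loads:
        if y > high:
            on = True      # overload: engage throttling
        elif y < low:
            on = False     # well below target: release throttling
        # between the two thresholds the previous state is kept; the
        # hysteresis band prevents chattering around a single threshold
        signal.append(on)
    return signal
```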

Pole placement [Wittenmark et al. 2002] is a special case of state-feedback control that assumes all system states are measured and used to compute the control signal. Since the overall state vector cannot always be measured, this technique is typically combined with measurements used to reconstruct the states’ values. To use this technique, a state-space system model must be available and some modifications are required in order to account for possible disturbances and model uncertainties.

A special case of pole placement design is the so-called deadbeat control [Wittenmark et al. 2002]. This strategy drives the controlled output to the desired value in at most n time units, where n is the order of the controlled process.
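For a first-order plant with perfectly known parameters, the deadbeat input can be computed in closed form. The sketch below (plant coefficients are illustrative) drives the output onto the setpoint in a single step, i.e., n = 1; the exact-model assumption is precisely the strong requirement behind deadbeat design.

```python
# Deadbeat control for a first-order plant y(k+1) = a*y(k) + b*u(k):
# with a and b known exactly, one input places the output on the
# setpoint in a single step (n = 1). Plant values are illustrative.

A, B = 0.8, 0.5

def deadbeat_input(y, setpoint):
    # Solve setpoint = A*y + B*u for u.
    return (setpoint - A * y) / B

y = 4.0
u = deadbeat_input(y, setpoint=1.0)
y_next = A * y + B * u    # lands on the setpoint after one step
```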

The most common controller, however, is the PID controller [Astrom and Hagglund 2006]. This type of controller covers about 90% [Wittenmark et al. 2002] of the industrial applications, due to its simplicity and flexibility. It is based on a very simple principle, which is to compute the control input u(k) (see Figure 2) according to the law

u(k) = u(k − 1) + K ∆e(k) + (K h / Ti) e(k) + (K Td / h) [∆e(k) − ∆e(k − 1)]     (1)

where the three summands are the proportional, integral, and derivative terms, respectively; e(k) = ȳ(k) − y(k) is the error between the desired behavior ȳ(k) and the actual behavior y(k) of the system, ∆e(k) = e(k) − e(k − 1) is the variation of the error, h is the sampling time of the controller, and K, Ti and Td are the three parameters of the PID controller. Note that (1) is a PID control law in velocity form, cf. [Astrom and Hagglund 2006]. The three terms in the control law (1) serve different purposes:

— A higher proportional gain K generally makes the closed-loop system respond faster, but may make it unstable.

— The integral term can mitigate steady-state deviations from the setpoint that a purely proportional controller cannot handle, but can introduce overshoot in the system response.

— A derivative term improves response time and stability, but may amplify the influence of measurement noise.
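A minimal implementation of the velocity-form law (1) might look as follows; the gains and the toy first-order plant in the usage example are illustrative rather than tuned values.

```python
# Sketch of the velocity-form PID law (1): u(k) = u(k-1) + K*de(k)
# + (K*h/Ti)*e(k) + (K*Td/h)*(de(k) - de(k-1)). Gains are illustrative.

class VelocityPID:
    def __init__(self, k, ti, td, h):
        self.k, self.ti, self.td, self.h = k, ti, td, h
        self.u = 0.0         # previous control value u(k-1)
        self.e_prev = 0.0    # e(k-1)
        self.de_prev = 0.0   # delta-e(k-1)

    def update(self, setpoint, measurement):
        e = setpoint - measurement
        de = e - self.e_prev
        self.u += (self.k * de                                         # proportional
                   + self.k * self.h / self.ti * e                     # integral
                   + self.k * self.td / self.h * (de - self.de_prev))  # derivative
        self.e_prev, self.de_prev = e, de
        return self.u

# Closing the loop around a toy first-order plant y(k+1) = 0.5*y(k) + 0.5*u(k):
pid = VelocityPID(k=0.5, ti=1.0, td=0.0, h=1.0)
y = 0.0
for _ in range(50):
    u = pid.update(setpoint=1.0, measurement=y)
    y = 0.5 * y + 0.5 * u
# y has settled at the setpoint
```

Because the law updates the increment of u(k), the previous control value acts as the memory of the controller, which is what makes anti-windup clamping straightforward in this form.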


PID controllers are popular as they do not require an explicit system model, but can be tuned on the basis of experiments and established heuristics [Astrom and Hagglund 2006]. On the other hand, PID control does not generalize easily to MIMO systems.

An alternative technique is loop-shaping. The control designer decides the closed-loop transfer function’s shape and designs the controller accordingly. The desired closed-loop transfer function is typically related to some time-domain specifications, e.g., the required settling time, or the maximum disturbance amplification.

3.2.2. Optimal control. In optimal control, the control value is obtained so as to minimize a cost function, possibly subject to some constraints. Typically, the objective is to maximize control performance, given prescribed guarantees [Morari and Zafiriou 1989; Zhou et al. 1996]. Whenever the cost function is a quadratic function, and the constraints contain linear first-order dynamic constraints, the problem can be classified as a Linear Quadratic (LQ) optimal control problem. A special case is the Linear Quadratic Regulator (LQR) [Skogestad and Postlethwaite 2007].

A particularly successful heuristic for optimal control under constraints is Model Predictive Control (MPC) [Maciejowski 2002; Camacho and Alba 2013]. MPC predicts the future behavior from the current system state under a particular control action and selects the input sequence that minimizes the chosen cost function. Only the first step of that input sequence is applied, and at the next time step the new system state is determined and the process repeated. This control strategy is also called Receding Horizon Control (RHC).

Such techniques are widely used in industry, especially when physical constraints must be enforced. A good system model is required for MPC. Computational resources can be an issue, as solving the optimization problem for larger (or nonlinear) systems can take significant time. Variants exist which deal with disturbances and model uncertainties in different ways [Bemporad and Morari 1999; Goodwin et al. 2014].
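The receding-horizon loop can be sketched for a scalar model by brute-force enumeration of short input sequences; a real MPC would call an optimization solver, and the model, horizon, input alphabet, and cost weights here are all illustrative assumptions.

```python
# Receding-horizon sketch: enumerate short input sequences for a known
# scalar model x(k+1) = A*x(k) + B*u(k), pick the cheapest, and apply
# only its first input. Model and cost weights are illustrative.

from itertools import product

A, B = 0.9, 0.4
INPUTS = (-1.0, 0.0, 1.0)    # small discrete alphabet keeps the search cheap
HORIZON = 4

def cost(x, seq, setpoint):
    total = 0.0
    for u in seq:
        x = A * x + B * u
        total += (x - setpoint) ** 2 + 0.01 * u ** 2   # tracking + effort
    return total

def mpc_step(x, setpoint):
    best = min(product(INPUTS, repeat=HORIZON),
               key=lambda seq: cost(x, seq, setpoint))
    return best[0]           # receding horizon: apply only the first input

x = 0.0
for _ in range(30):
    x = A * x + B * mpc_step(x, setpoint=2.0)
# x hovers near the setpoint, limited by the coarse input alphabet
```

Re-solving at every step is what gives MPC its feedback character: the plan is continually revised against the measured state rather than executed open-loop.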

Another class of optimal controllers is H∞ control [Skogestad and Postlethwaite 2007]. This class is important in many applications in which guarantees on the obtainable performance are crucial. While mature tools exist that can automatically synthesize H∞ controllers, mathematical expertise is required to understand the advantages and limitations of H∞ control.

3.2.3. Adaptive control. Adaptive control is a class of techniques that allows a controller to adapt its behavior to time-varying or uncertain structural properties [Astrom and Wittenmark 2013]. Adaptive controllers modify the control law at run-time to adapt to the discrepancies between the expected and the actual system behavior. Usually, adaptation mechanisms are built on top of existing controllers, and either modify the controller’s parameters or select the most effective controller from a set of possibly structurally different ones, depending on the detected operating point [Astrom and Wittenmark 2013]. The control task is, however, still performed by the controller. Most of the approaches using adaptive control for self-adaptive software perform parametric adaptation, i.e., the structure of the controller is not changed, but only the values of its parameters, according to an adaptation policy.

An adaptive controller combines an online parameter or system state estimator with a control law designed from the prior knowledge about the system. The way the estimates are combined with the control law gives rise to different approaches. Gain scheduling estimates the system’s current operating region and then changes the controller’s parameters based on that estimate. Model Identification Adaptive Controllers (MIACs) use online identification techniques to estimate a good system model and modify the controller accordingly. Model Reference Adaptive Controllers (MRACs) change the controller’s parameters according to the discrepancy between the expected and actual system behavior with respect to a reference model.
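As a minimal illustration of gain scheduling, the sketch below selects controller gains from a table indexed by the estimated operating region; the regions and gain values are invented for the example.

```python
# Minimal gain-scheduling sketch: pick controller gains from a table
# indexed by the estimated operating region (here, measured load level).
# Regions and gain values are invented for the example.

SCHEDULE = [
    (0.5, {"kp": 2.0, "ki": 0.5}),    # light load: aggressive gains
    (0.8, {"kp": 1.0, "ki": 0.2}),    # medium load
    (1.0, {"kp": 0.4, "ki": 0.05}),   # heavy load: conservative gains
]

def gains_for(load):
    """Return the gain set of the first region covering the load estimate."""
    for upper_bound, gains in SCHEDULE:
        if load <= upper_bound:
            return gains
    return SCHEDULE[-1][1]            # beyond the table: most conservative
```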

The term “adaptive control” usually refers to a specific class of adjustment mechanisms enabling controllers to face unexpected behavior. This is very different from, and usually more complex than, many “adaptation” policies that are used in software systems. Unfortunately, the two terms are very close yet distinct, making communication between the two communities difficult.

3.2.4. More complex techniques. Most robust control methods typically deal with uncertainties in a “worst-case” sense, providing strong guarantees under fairly strong assumptions. Stochastic control, on the other hand, tries to minimize the influence of uncertainties, providing probabilistic guarantees on the satisfaction of its requirements (e.g., the system will achieve its goals with a probability larger than a certain threshold) [Kumar and Varaiya 1986]. The benefit is that the assumptions on the uncertainties are generally much weaker; e.g., instead of assuming bounded disturbance signals, only an assumption on the probability distribution is necessary.

The control techniques outlined in the previous section have mostly been developed for linear systems because of the complete mathematical theory existing for this model class. Since most real systems are non-linear, nonlinear control is an active field in research and application [Khalil 2015]. Since nonlinearities can take so many different forms, nonlinear control methods are often tailored to specific model structures. Many of the methods designed for linear systems have also been applied to nonlinear systems, e.g., nonlinear MPC [Grune and Pannek 2011]. PID controllers in particular are often applied to nonlinear systems because of their easy tuning and good robustness properties. Additionally, many nonlinear systems only operate under certain conditions, often allowing the nonlinear dynamics to be linearized around such an operating point and the resulting linear model to be used.

A particular class of nonlinear models that may be especially relevant in the context of software systems are the so-called hybrid systems [Goebel et al. 2012; Labinaz et al. 1997]. They combine continuous state dynamics on a continuous- or discrete-time scale, e.g., the number of requests in a queue, with intermittent discrete events, e.g., the availability of more computing resources. Analysis and control of hybrid systems have to reconcile the continuous state dynamics with abrupt discrete changes in the system’s state or behavior. Although hybrid systems are attracting growing interest for modeling software behaviors [Zhan et al. 2013], verifying the properties of (controlled) hybrid systems is particularly challenging [Alur 2011], with restrictive decidability results for the more general classes of hybrid models [Henzinger et al. 1998]. A special case of a hybrid controller is the so-called switching controller [Liberzon 2003]. The idea of switching control is similar to adaptive control, but instead of modifying the parameters of the controllers, a switching policy decides how to switch between controllers of different structure. In software engineering terms, it is a structural adaptation that changes the control component itself. For a broader overview of the current state of the art on hybrid control, we refer the interested reader to [Lunze and Lamnabhi-Lagarrigue 2009].

3.3. Event-based controllers

Event-based controllers react to specific events by making decisions about the current system. This class includes controllers entirely designed in the event space, for example logic controllers, and controllers that are event-based extensions of a continuous- or discrete-time control. Here, we briefly analyze this last class and then discuss “pure event” strategies.


In the control domain, there is a distinction between event-triggered and self-triggered control strategies [Heemels et al. 2012; Astrom 2008; Lunze and Lehmann 2010]. An initial effort was devoted to studying what happens when a controller designed to be executed periodically is instead called when a certain event happens, for example when a performance drop is experienced. More recently, the control community has put some effort into the systematic design of event-based implementations of feedback control laws [Tabuada 2007; Heemels et al. 2008; Leva and Papadopoulos 2013]. Event-triggered controllers are based on constant monitoring of some triggering conditions that could lead to the execution of the controller code. The controller can be designed with any of the mentioned strategies. On the contrary, self-triggered controllers are not based on constant monitoring; instead, the controller, before terminating its execution, programs a new wakeup time [Camacho et al. 2010; Leva and Papadopoulos 2013]. Whenever monitoring is extremely expensive or the verification of the triggering condition is non-trivial, self-triggered control strategies can offer a better alternative to event-triggered ones. However, it must be noted that the design of the controller is often conducted with the same techniques used for continuous- and discrete-time control strategies (see Section 3.2).
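A minimal event-triggered wrapper can be sketched as follows: the control law runs only on samples where the triggering condition holds, rather than at every sampling instant. The threshold, the latency samples, and the control law are illustrative assumptions.

```python
# Event-triggered wrapper around a control law: the controller executes
# only when the triggering condition on the error holds, instead of at
# every sampling instant. Threshold and control law are illustrative.

def event_triggered(measurements, setpoint, threshold, control_law):
    """Run control_law only on samples that satisfy the trigger."""
    invocations = []
    for k, y in enumerate(measurements):
        e = setpoint - y
        if abs(e) > threshold:             # triggering condition
            invocations.append((k, control_law(e)))
        # otherwise the previous control value is held and no
        # controller code runs at this sampling instant
    return invocations

# Hypothetical latency samples around a 1.0-second setpoint:
events = event_triggered([1.0, 1.05, 1.4, 0.8], setpoint=1.0,
                         threshold=0.1, control_law=lambda e: 2 * e)
# the controller ran only on the third and fourth samples
```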

In the computer science domain, the problem of event-based controller synthesis has been defined [Ramadge and Wonham 1989; Pnueli and Rosner 1989]. Event-based controller synthesis can be abstractly defined as follows: given a model of the assumed behaviour of the environment (E) and a system goal (G), controller synthesis produces an operational behaviour model for a component M that, when executing in an environment consistent with the assumptions, results in a system that is guaranteed to satisfy the goal – i.e., E‖M |= G.

The main motivation for event-based controllers in self-adaptive systems is the need for high-level adaptation strategies and decision-level behaviour plans. Thus, event-based controllers, in contrast to discrete-time ones, entail complex strategies to achieve high-level complex goals (e.g., architectural adaptation) that require a non-trivial combination of actions. Further, event-based controllers provide formal guarantees that ensure goals are satisfied.

In order to apply event-based controller synthesis, it is required to have a formal specification of the environment assumptions and system goals. Temporal logics (e.g., LTL [Pnueli 1977]) are a widely adopted formalism to specify environment assumptions and system goals in computer science, and more specifically in the Software Engineering community. Temporal logics not only provide a framework that formalises the entailment (|=) operator, central to the controller synthesis problem, but also enable declarative and precise descriptions of high-level complex goals. Many have addressed the problem of synthesising a controller for safety goals (e.g., [Ramadge and Wonham 1989]), while others have introduced approaches for general LTL formulas (e.g., [Pnueli and Rosner 1989]). Here we report on a number of approaches that incorporate (and thus guarantee) different subsets of LTL as system goals.

A number of architectural approaches for self-adaptive systems [Kramer and Magee 2007; Garlan et al. 2004a; Dashofy et al. 2002; Batista et al. 2005; Oreizy et al. 1999; Kang and Garlan 2014; Inverardi and Tivoli 2003; Braberman et al. 2015] have been proposed. At the heart of many of these adaptation techniques is a component capable of designing at run-time a strategy for adapting to changes in the environment, system, and requirements (e.g., [Braberman et al. 2015]). Interestingly, no mechanism for generating adaptation strategies is prescribed. In fact, depending on the knowledge of the environment and the type of goal, a wide range of event-based controller synthesis techniques could be applied.

ACM Transactions on Autonomous and Adaptive Systems, Vol. V, No. N, Article A, Publication date: January YYYY.


There is a large variety of controller synthesis techniques [Pnueli and Rosner 1989] that guarantee the satisfaction of safety and even liveness [Piterman et al. 2006; D'ippolito et al. 2013] requirements within the constraints enforced by the problem domain, the capabilities offered by the system, and under fairness and progress assumptions on the controller's environment.

Traditional techniques for controller synthesis are Boolean in the sense that a controller either satisfies a set of goals or it does not. This two-valued view is limiting as, in general, there are multiple ways to satisfy a set of goals, leading to several levels of satisfaction with respect to certain preference notions, i.e., quality attributes. Typically, such quality attributes are modelled by introducing a quantitative aspect into the system specification, imposing a preference order on the controllers that satisfy the qualitative part of the specification (e.g., [Chatterjee et al. 2015]). Thus, the event-based synthesis procedure has both qualitative and quantitative aspects, the former modelling goals the controller must satisfy and the latter modelling, in detail, the properties of the dynamical behaviour exhibited by the system that are to be, for example, maximized.

In some cases the environment behaviour is stochastic. Event-based controller synthesis techniques for such models are normally based on the notion of stochastic games [Chatterjee et al. 2004]. The problem of stochastic synthesis has qualitative and quantitative components. The qualitative component is to answer whether a controller can guarantee the satisfaction of a goal, or whether it has a positive probability of achieving the goal [Chatterjee et al. 2003]. The quantitative component, on the other hand, is to compute the exact probability with which a controller can fulfil the goals [Chatterjee et al. 2015]. Stochastic control problems have various forms depending on the type of environment with which the controller must interact. When the controller is meant to interact with a purely stochastic environment, 1½-player games are used. Such games can be expressed with Markov Decision Processes (MDPs) [Filar and Vrieze 1996; Puterman 1994]. In cases where we model the unknown behaviour as stochastic (e.g., failures) and known properties of the environment as adversarial, 2½-player games are required. In such games we have one player (the controller) playing against another one (the environment), and both interact with a third player who plays according to a certain probability distribution.

3.4. Knowledge-based controllers

Conventional controllers (e.g., equation-based or adaptive controllers) can be designed following established, mathematically grounded processes, guaranteeing the effectiveness of the controllers, including their stability, settling time, or robustness. However, a correct design and the understanding of the underlying theory may require specific mathematical knowledge, not mastered by most software engineers. Furthermore, modeling and controlling systems exhibiting particularly complex non-linear behaviors may require a good degree of expertise, since standard controller design processes seldom apply to them off-the-shelf.

Knowledge-based controllers allow overcoming the difficulties of formalizing both a system behavior and its control law. Indeed, the system behavior, including its knobs, its outputs, (optionally) its internal state, and the effects of each possible control action, is represented in a form suitable for formal deduction and inference by a reasoning engine. The reasoning engine is then in charge of deciding the most appropriate control actions based on the knowledge about the system, augmented with monitoring information about the current state of execution. Due to the higher level of abstraction, and often the lack of knowledge about the internal machinery of the reasoning engine, proving certain properties of these controllers might be harder [Levine 2010].


There are several possible choices for the reasoning engine, and consequently for the representation of the knowledge base. In the following we briefly recall the basic principles behind two popular approaches: fuzzy rule-based reasoning and case-based reasoning.

3.4.1. Fuzzy rule-based control. Fuzzy control is grounded on fuzzy logic, which is, in its simplest form, an extension of propositional logic where each variable may have a degree of truth ranging in the interval [0, 1] ∩ R [Yager and Zadeh 2012]. The reasoning in this framework is based on fuzzy inference, the process of mapping a set of control inputs to a set of control outputs through a set of fuzzy rules. The main application of fuzzy controllers is to problems that cannot be represented by explicit mathematical models due to the high non-linearity of the system. Instead, the potential of fuzzy logic lies in its capacity to approximate that non-linearity by knowledge, in a way similar to human perception and reasoning. The explicit knowledge base (encompassing the user-defined fuzzy rules) is one of the unique aspects of this type of controller. Instead of sharply switching between modes based on thresholds, the control output changes smoothly across different regions of behavior depending on the dominant rules.

Fuzzy controllers have been applied in the context of virtualized resource management [Xu et al. 2007; Rao et al. 2011; Lama and Zhou 2013] and cloud computing [Wang et al. 2015; Jamshidi et al. 2014]. For instance, the inputs to such controllers may include the workload level (w) and the response time (rt), and the output may be the scaling action (sa) in terms of an increment (or decrement) in the number of VMs. The design of a fuzzy controller, in general, involves the following tasks: (1) defining the fuzzy sets and membership functions of the input signals; (2) defining the rule base, which determines the behavior of the controller in terms of control actions using the linguistic variables defined in the previous task. The very first step in the design process is to partition the state space of each input variable into various fuzzy sets through membership functions. Each fuzzy set is associated with a linguistic term such as "low" or "high". The membership function, denoted by µy(x), quantifies the degree of membership of an input signal x to the fuzzy set y (cf. Figure 3). As shown, three fuzzy sets have been defined for the workload to achieve a reasonable granularity in the input space while keeping the number of states small to reduce the set of rules in the knowledge base.

The next step consists of defining the inference machinery for the controller. Here we need to define elasticity policies in terms of rules such as "IF (w is high) AND (rt is bad) THEN (sa = +2)", where the output function is a constant value. Note that, depending on the problem at hand, this can be any finite discrete set of actions. For the definition of the functions in the rule consequents, the knowledge and experience of a human expert is generally used. In situations where no a priori knowledge for defining such rules can be assumed, a learning mechanism can be adopted instead [Lama and Zhou 2013; Jamshidi et al. 2016].

Once the fuzzy controller is designed, its execution comprises three steps: (i) fuzzification of the inputs, (ii) fuzzy reasoning, and (iii) defuzzification of the output. The fuzzifier projects the crisp data onto fuzzy information using membership functions. The fuzzy engine reasons on this information based on a set of fuzzy rules and derives fuzzy actions. The defuzzifier reverts the results back to crisp mode and activates an adaptation action. The output is calculated as a weighted average:

y(x) = ∑_{i=1}^{N} µi(x) × ai,    (2)


Fig. 3. Fuzzy membership functions for auto-scaling variables. (Figure omitted: three fuzzy sets, Low, Medium, and High, over the workload axis, with membership degrees ranging from 0 to 1 and breakpoints α, β, γ, δ.)

where N is the number of rules, µi(x) is the firing degree of rule i for the input signal x, and ai is the consequent function of the same rule.
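The fuzzification / rule firing / defuzzification cycle described above can be sketched for a single-input auto-scaler as follows. The membership breakpoints and rule consequents are illustrative assumptions, not values prescribed by the cited approaches.

```python
# Sketch of a fuzzy auto-scaling controller: membership functions,
# a rule base with constant consequents, and defuzzification per Eq. (2).

def mu_low(w):
    """Trapezoidal 'Low' set: full membership below 20, fading out by 40."""
    return 1.0 if w <= 20 else max(0.0, min(1.0, (40 - w) / 20))

def mu_medium(w):
    """Triangular 'Medium' set peaking at workload 50."""
    return max(0.0, 1.0 - abs(w - 50) / 30)

def mu_high(w):
    """Trapezoidal 'High' set: fading in from 60, full membership above 80."""
    return 1.0 if w >= 80 else max(0.0, min(1.0, (w - 60) / 20))

# Rule base: each rule pairs a fuzzy set with a constant consequent a_i
# (the scaling action sa, in number of VMs).
rules = [(mu_low, -1.0), (mu_medium, 0.0), (mu_high, +2.0)]

def scaling_action(workload):
    """Defuzzification as in Eq. (2): y(x) = sum_i mu_i(x) * a_i.
    (Some implementations additionally normalize by sum_i mu_i(x).)"""
    return sum(mu(workload) * a for mu, a in rules)
```

With this rule base, a workload of 90 fires only the "high" rule fully and yields a scale-out of two VMs, while a workload of 10 fires only the "low" rule and yields a scale-in of one.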

3.4.2. Case-based reasoning. Case-based reasoning (CBR) is based on the recognition of similar cases from previous experience [Qian et al. 2014]. The reasoning process comprises the following steps:

a) Matching the new situation to prior cases
b) Identifying the closest matches
c) Combining the closest matches to find a solution
d) Storing the new case and the solution in the case base

Given an initial set of cases, a CBR system grows its experience while operating at runtime, taking faster decisions for known cases by reusing "what has worked before" [Lambert 2001] instead of repeatedly re-computing complex interpolations among the closest cases.
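The four steps above can be sketched as a nearest-neighbour retrieval over a small case base. The case representation (workload, response time → scaling action), the Euclidean distance, and the averaging of solutions are all illustrative assumptions.

```python
# Sketch of the CBR cycle: match, select the closest cases, combine their
# solutions, and store the new case for future reuse.

def nearest_cases(case_base, situation, k=2):
    """Steps (a)-(b): rank prior cases by similarity to the new situation."""
    def dist(case):
        return sum((a - b) ** 2 for a, b in zip(case[0], situation))
    return sorted(case_base, key=dist)[:k]

def solve_and_store(case_base, situation, k=2):
    """Steps (c)-(d): average the closest matches' solutions, then remember
    the new case so future decisions on similar situations are faster."""
    closest = nearest_cases(case_base, situation, k)
    solution = sum(sol for _, sol in closest) / len(closest)
    case_base.append((situation, solution))
    return solution

# Case base: ((workload, response_time), scaling_action) pairs.
base = [((90, 2.0), 2), ((20, 0.1), -1), ((85, 1.8), 1)]
action = solve_and_store(base, (88, 1.9))
```

A subsequent query close to (88, 1.9) would now match the stored case directly, which is the "reuse what has worked before" effect described above.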

3.4.3. Learning knowledge-based controllers. The lack of knowledge about the system or some of its parts during design may limit the effectiveness of a knowledge-based controller. For example, (de)fuzzification mechanisms can be re-tuned or new inference rules can be added to the controller after additional knowledge is gathered at runtime. Machine learning techniques have been applied for this purpose. A notable example is the use of reinforcement learning to automatically build a rule set for a fuzzy controller based on the evaluation of explorative adaptation action selections [Tesauro 2007; Jamshidi et al. 2016; Lama and Zhou 2013].

3.5. Dealing with uncertainty in control strategies for adaptation

Independently of the particular model-based control technique that is adopted, the first task in controller design is modeling the system to be controlled. This system, however, may be very complex and its dynamics may not be completely understood. Developing accurate system models over an operating range is a challenging task [Astrom and Murray 2008], especially for computing systems [Hellerstein et al. 2004; Maggio et al. 2012; Leva et al. 2013; Papadopoulos et al. 2014]. Even if a detailed mathematical model is available, its complexity may make the controller design challenging and computationally expensive [Morari and Zafiriou 1989]. Of course, the more complete and accurate the model, the more robust the controller is. Thus, there is tension between the complexity of the model and the controller's robustness to unmodeled or uncertain characteristics.

Besides the challenge of determining an appropriate level of abstraction to balance the simplicity of the model and the amount of information available to the controller, engineering self-adaptive software requires particular attention to the modeling of uncertain phenomena. Uncertainty can be at the same time the driver for and the outcome of adding self-adaptation capabilities to a system. The research community devoted extensive efforts to studying and modeling uncertainty in self-adaptive software (see, e.g., [Esfahani and Malek 2013; Esfahani et al. 2011; Cheng and Garlan 2007; Ramirez et al. 2012a; Ramirez et al. 2012b; Giese et al. 2014; Bencomo and Belaggoun 2014; Alessia Knauss et al. 2016; Cheng et al. 2009]) and elicited three main sources of uncertainty:

(1) requirements can change over time and have different priorities for different users [Baresi et al. 2006]; specifications are incomplete, and designs can intentionally be left open-ended [Letier et al. 2014];

(2) software interacting with physical systems, including unreliable hardware infrastructures, possibly has to deal with an epistemic uncertainty intrinsic to some physical phenomena (e.g., a GPS localization device outputs a distribution over likely positions) [Zhang et al. 2016];

(3) software deployed in the wild has to tackle interactions with a variety of users, whose behavior is hardly predictable and often changes over time [Camara et al. 2015].

On the other hand, the very addition of adaptation capabilities to software, if not properly designed, may introduce additional uncertainty about its behavior. This may compromise the overall dependability of self-adaptive systems [Calinescu et al. 2012] and is the primary driver for developing principled design techniques and quality assurance practices. So far we have described the mathematical principles control theory offers to design reliable-by-design controllers.

In the remainder of this section, we first discuss three common ways of dealing with uncertainty in control-theoretical software adaptation, and then elicit a set of tasks software engineers may be required to perform when dealing with uncertainty.

Dealing with uncertainty in control. There are different ways to deal with uncer-tainty, including adaptive and robust control, probability theory, and fuzzy logic.

In classical control theory, the typical approach is to provide a model, possibly probabilistic, of the uncertainty, and to adopt a control technique that is conceived to be robust against the modeled uncertainty. Possible approaches are then to deal with the uncertainty following a worst-case scenario approach [Campi et al. 2009], as is the case of H∞ control [Skogestad and Postlethwaite 2007], or to adapt the controller's behavior online, according to quantities measured on the system, as in adaptive control [Astrom and Wittenmark 2013].

When the uncertainty can be restricted to a known interval, one can also use interval Markov models, for which several techniques have been developed recently [Chatterjee et al. 2008; Benedikt et al. 2013; Chen et al. 2013; Puggelli et al. 2013]. Typically, these methods work by determining the extremal values a probabilistic parameter can take. When the range of uncertainty is not known, one can opt for parametric Markov models, which construct a symbolic model that can later be evaluated with specific probabilities. Recent work includes methods from parametric model checking [Hahn et al. 2010; Filieri et al. 2011a; Dehnert et al. 2015].
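The interval idea can be illustrated on a tiny chain. The chain, the interval, and the closed-form reachability expression below are illustrative assumptions; the closed form plays the role of the symbolic expression a parametric model checker would produce.

```python
# Sketch of bounding a reachability probability when a transition
# probability p is only known to lie in an interval: evaluate the
# (symbolic) reachability expression at the extremal values of p.

def reach_probability(p, q=0.5):
    """P(eventually 'done') in a toy chain:
    s0 --p--> done, s0 --(1-p)--> s1, s1 --q--> done, s1 --(1-q)--> fail."""
    return p + (1 - p) * q

def reach_bounds(p_low, p_high):
    """The expression is monotone in p here (its derivative is 1 - q >= 0),
    so the extremal parameter values give the extremal probabilities."""
    return reach_probability(p_low), reach_probability(p_high)

lo, hi = reach_bounds(0.6, 0.9)   # p known only to lie in [0.6, 0.9]
```

Here the reachability probability is guaranteed to lie in [0.8, 0.95] whatever the true p; in general, monotonicity must be established (or all extremal corners checked) before evaluating only at the interval endpoints.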

Finally, fuzzy logic controllers deal with uncertainty by quantifying the ambiguity due to the unmodeled underlying phenomena [Jamshidi et al. 2014]. Controllers based on fuzzy theory are called fuzzy logic controllers [Jantzen 2013]. They are particularly suited for nonlinear control and non-probabilistic uncertainty [Mendel and Wu 2010].

Common tasks in dealing with uncertainty. The fact that many different approaches exist to deal with uncertainty is an indication of how difficult this problem is. In fact, dealing with uncertainty raises a number of different challenges:

— Model Selection To enable tractable analysis, most mathematical models are inherently approximate. While many phenomena can be modeled using probability theory or analyzed using fuzzy logic, software designers must select appropriate models.

— Knowledge Evolution Typically, adaptive software systems use some sort of adaptation knowledge, either in terms of analytical models or explicit adaptation rules, for controlling the underlying software systems. However, the model may drift at runtime due to changes in the assumptions under which the model was made. Some recent work [Abbas et al. 2011; Jamshidi et al. 2016] investigated this challenge.

— Reasoning about lack of domain knowledge Explicitly adding uncertainty to software is an application-specific effort. Generic techniques for trading application speedup and quality are still in their early stages. Some solutions address uncertainty by learning parameters online. For instance, reinforcement learning can explore the environment and pass knowledge to a controller for more effective decision making [Hoffmann 2015].

— Reasoning about Multiple Sources of Uncertainty Even when individual uncertainty sources are modeled using a single underlying theory (e.g., probability theory), it is not clear how to combine the different models to characterize the uncertainty of the computation. Currently, combining multiple uncertainty models in a single application requires careful, application-specific approaches. General methods for combining uncertainty models are another key challenge.

— System Adaptation Given an accuracy characterization of the approximate programs, runtime systems can use this information to meet accuracy, performance, and energy targets [Baek and Chilimbi 2010; Hoffmann et al. 2011; Samadi et al. 2014]. However, adaptively controlling accuracy in a systematic manner is still often an ad-hoc process. Work on more general methodologies is ongoing [Filieri et al. 2014; Hoffmann 2014].

4. SOFTWARE ENGINEERING FOR CONTROL THEORY

The mathematical foundation of control theory has shaped its practical development: the standard practice in the design and implementation of control systems is to mathematically devise an ad-hoc solution for each problem instance [Levine 2010], disregarding, in the first place, implementation concerns. Whereas the mathematical derivation of tailored controllers for complex problems may still require human expertise, software engineering has established principles and design techniques that are carving their way into the design of control systems. In this section, we sketch some of the directions in which software engineering is shaping the design and implementation of software controllers. These results can also ease the integration of control in self-adaptive software.

Domain specific languages. The established approach to controller design and implementation is the use of domain-specific languages. Development suites like Matlab and Simulink, Modelica, Ptolemy II [Eker et al. 2003], as well as less popular ones like ACTRESS [Krikava et al. 2014] or EUREMA [Vogel and Giese 2014], provide domain-specific constructs to ease the formalization of dynamic models and to automatically generate executable code targeted at different hardware infrastructures. Despite the success of these tools, the evolution of both sophisticated control techniques and execution platforms is posing unprecedented challenges. As an example, model predictive control requires the solution of complex optimization problems at runtime. Programmable and specialized hardware presents new opportunities for automatically transforming and compiling domain-specific languages into platform-specific ones, unleashing computational power beyond the reach of general-purpose architectures. While preliminary ad-hoc solutions exist (e.g., [Kerrigan 2014; Hartley et al. 2014]), their generalization is still a concern for researchers in software engineering and programming languages [Dubach et al. 2012]. Just-in-time compilation, generative programming, and modular staging [Rompf and Odersky 2010] are growing in importance in software engineering and can change the shape of controller development, as early results demonstrate [Ofenbeck et al. 2013; Kong et al. 2013].

Design patterns. Control theory has developed common solution patterns for several complex problems, usually involving multiple controllers. These go under the term of control allocation strategies [Levine 2010], and such strategies have clear semantic similarities to design patterns in software engineering. The growing scale and complexity of industrial control [Vyatkin 2013] are calling for control engineers to face new software engineering problems and to start exploring specific design patterns [Sanz and Zalewski 2003; Scattolini 2009]. To further improve in this direction, a deeper cross-fertilization between software and control engineering is needed. On the other hand, robotics and cyber-physical systems are offering a natural playground for this encounter [Brugali 2007; Rajkumar et al. 2010].

Verification and validation of controller implementation. Although control theory has developed techniques to validate system models experimentally, and in turn the theoretical effectiveness of their controllers, an implementation may differ from its design. Software engineering is playing a role both in producing correct-by-construction implementations from the mathematical description of controllers [Kowshik et al. 2002; Darulova and Kuncak 2014; Cai 2002] and in the definition of specialized testing and verification procedures [Darulova and Kuncak 2013; Ismail et al. 2015; Matinnejad et al. 2016; Zuliani et al. 2010; Villegas et al. 2011; Camara et al. 2013].

Implementation of distributed control. Networked control systems are rapidly developing thanks to advances in both industrial automation and robotics [Yang 2006]. Software engineering developments in the fields of actor models and reactive programming are supporting their implementation [Liu et al. 2004; Muscholl 2015; Delaval et al. 2013]. However, to the best of our knowledge, no languages specific to distributed control have been developed so far.

5. CONCLUSIONS AND OPEN RESEARCH CHALLENGES

Self-adaptation mechanisms for software systems require a formal grounding to assure their dependability and effectiveness. Control theory has shown promising results in providing such grounding in a variety of situations. In this paper, we elicited and discussed a set of control strategies software engineering can glean from to define dependable software adaptation techniques. Furthermore, we presented control theoretical results in the light of common software development activities, from requirements formalization to implementation and verification, to show how each of those activities can benefit from an understanding of control concepts.

Although not all software adaptation problems find a straightforward casting into the control domain, the concepts and techniques proposed in this paper can represent a core toolkit enriching the corpus of established solutions for self-adaptive software design.

To conclude this work, in the remainder of this section we elicit a first set of open research challenges in developing controllable software. We divide these into two categories: challenges for software engineers and challenges for control engineers. For each category, we first describe the overarching goal and then some key objectives that must be met to achieve this goal.

5.1. The software engineering perspective

Software engineers must create controllable software, i.e., make control a first-class design concern [Muller et al. 2008; Brun et al. 2009]. Software engineers will benefit, as controllable software permits formal analysis of its dynamic behavior. For example, engineers who design controllable software can then mathematically reason about its performance, energy, security, reliability, or other non-functional requirements, despite the fact that the software may be deployed into an unpredictable and fluctuating environment.

The need for adaptive software that can change its own behavior in a changing environment has long been recognized [Cheng et al. 2009]. Controllable software is a subset of adaptive software. While adaptive software can change its behavior, controllable software changes its behavior according to a set of well-founded mathematical models whose dynamics can be formally analyzed. This section presents several objectives that will help achieve the goal of creating controllable software.

Control design patterns. To make control techniques directly usable by software engineers, without requiring a deep knowledge of control theory, the community can work on identifying design patterns specific to control. Design patterns allow for the definition of reusable solutions and ease the communication of design choices; capturing the similarities within classes of software adaptation problems solvable with control techniques would allow a faster adoption of control from the early design stages. A top-down approach to the definition of control design patterns would first identify controllers and then determine the applications to which they are best suited (e.g., [Filieri et al. 2014; Filieri et al. 2015a]). A bottom-up approach would first find applications with feedback, then apply control techniques, and finally generalize (e.g., [Klein et al. 2014]).

Control smells. A code smell is "any symptom in the source code of a program that possibly indicates a deeper problem" [Fowler and Beck 1999]. While code smells are not bugs by themselves, they may indicate a design weakness. When a code smell is discovered, an engineer must decide whether or not to redesign. Categorizing common code smells and remedies allows engineers to focus on those parts of the program that are more likely to cause problems. Analogously, control smells may indicate where in the code it would be possible to add control solutions to improve a software system's ability to reach its goals.
Automatic knob identification is an example of such applications [Harman et al. 2014]. Furthermore, control smells can point to erroneous or misplaced implementations of controllers. For example, implementing two uncoordinated controllers targeting the same goal but using different actuators may lead to oscillating behavior; if the problem is detected, a remedy might be revising the control allocation or devising a multiple-output controller.

Testing and debugging of controlled systems. A self-adaptive software system's peculiar reactions to different environmental conditions challenge traditional quality assurance approaches. In Section 2.7 we reviewed recent results on validation and verification of controlled systems. However, controllers alter the behavior of the system, possibly leading to emerging behaviors not considered when analyzing the system alone. Verification and validation, as well as testing, of the controlled system have to take into account the interactions between the controller, the system, and its environment. How to reproduce a self-adaptive system's execution, how to identify an error's source, and how to debug it are just examples of the challenges controllable software places before software engineering.

Making non-adaptive software adaptive. While design for controllability is arguably a long-term goal, empowering existing software with adaptation capabilities is a short-term goal, which would improve its robustness and scalability. Most current software is not adaptive. Enabling adaptation requires identifying both actuators that alter behavior and sensors that measure this behavior, as well as devising controllers for each actuator. The identified sensors and actuators have to guarantee the observability and controllability of the system; observability is the property that the system state is inferable from the sensor measurements; controllability is the property that the available actuators can drive the system towards its goals. While in control theory these concepts are formalized and have a mathematical counterpart for many classes of dynamic models, it is still unclear how to assess these properties for software artifacts.

Automatic synthesis and update of controllers. Most software is not designed with controllability in mind. While in a long-term vision controllability may become a first-class concept in software design, (semi-)automated controller synthesis can provide a suitable solution for a variety of problems. Though no general synthesis method is currently known, several classes of software control problems are addressed with automated synthesis techniques (e.g., [Filieri et al. 2014; D'ippolito et al. 2013; Filieri et al. 2011b; Hoffmann 2014]). Automatic learning for knowledge-based controllers [Tesauro 2007] and auto-tuning mechanisms tailoring general solutions to specific cases are other means allowing practitioners to apply control theoretical adaptation mechanisms without mastering the underlying mathematical knowledge. This thread of research can empower practitioners with broadly applicable, ready-to-use solutions for several adaptation problems.

Coordinating multiple controllers. Complex software systems may require multiple controllers, possibly of different types. For example, functional adaptation can be managed by event-based controllers, able to select different component assemblies to guarantee safety or liveness. Quantitative goals can be handled by equation-based controllers. Multiple controllers may interact, introducing undesirable emerging behaviors and possibly compromising system stability. Analogous problems occur in assemblies of off-the-shelf components that may embed self-adaptation capabilities. Even self-adaptive execution infrastructures may interfere with the behavior of self-adaptive software. Examples are cloud-based execution environments, but also modern CPUs, such as the turbo-boost capabilities of some Intel processors, able to automatically adapt the degree of parallelism and the frequency of the cores. The coordination of multiple controllers is an open challenge for both Software Engineering and Control Theory. Several results from the analysis of distributed and concurrent systems have been adapted for event-based controllers; however, these results can hardly cope with heterogeneous controllers.

5.2. The control engineering perspective

Control theory has developed a broad set of mathematically grounded techniques for the control of physical processes. While several of these techniques have been adapted to different domains, software systems place unprecedented challenges before control solutions. In particular, many software behaviors are arguably not bound to the laws of the physical world, requiring control engineers to develop new modeling, sensing, actuation, and control techniques. While this development is most likely a joint effort of software and control engineers to design controllable software in the first place, control engineers are challenged with changing their perspective on several established problems.

Software models for control. Software systems typically lack mathematical models as described in Section 2.3. Many of the control methods described in Section 3 require a model in terms of differential or difference equations, which software engineers would not normally define. To model software systems mathematically, the numerous methods from system identification can be used [Ljung 2012]. Broadly, these methods construct models from measurements. For software, building these models might be easier than for physical systems, as running software is usually cheaper and can be done faster. This means large amounts of data for identification purposes are easily available. Nonetheless, system identification may be ineffective without the preliminary definition of a class of models to be trained from data. Software behaviors may be difficult to cast into established classes of differential or difference equations, possibly requiring the definition of more complex classes of hybrid models.
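The identification step can be sketched for the simplest model class, a first-order difference equation y(k+1) = a·y(k) + b·u(k), fitted to input/output measurements by least squares. The "measurements" below are synthetic, generated from assumed true parameters, so the fit can be checked against them; real identification data would come from instrumented runs of the software.

```python
# Sketch of black-box system identification: fit a first-order difference
# model y(k+1) = a*y(k) + b*u(k) by least squares (2x2 normal equations).

def identify_first_order(u, y):
    """Least-squares estimate of (a, b) from input u and output y sequences,
    minimizing sum_k (y[k+1] - a*y[k] - b*u[k])^2."""
    syy = syu = suu = sy1y = sy1u = 0.0
    for k in range(len(u) - 1):
        syy += y[k] * y[k]
        syu += y[k] * u[k]
        suu += u[k] * u[k]
        sy1y += y[k + 1] * y[k]
        sy1u += y[k + 1] * u[k]
    det = syy * suu - syu * syu          # assumes sufficiently rich data
    a = (sy1y * suu - sy1u * syu) / det
    b = (sy1u * syy - sy1y * syu) / det
    return a, b

# Generate noise-free data from a known model (a=0.8, b=0.5), then recover it.
u = [1.0, 0.0, 1.0, 1.0, 0.0, 1.0, 0.0, 0.0, 1.0, 1.0]
y = [0.0]
for k in range(len(u) - 1):
    y.append(0.8 * y[k] + 0.5 * u[k])
a, b = identify_first_order(u, y)
```

With noisy measurements from a real system, the estimates would only approximate the underlying dynamics, and a model-order choice (the "class of models" mentioned above) must precede the fit.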

ACM Transactions on Autonomous and Adaptive Systems, Vol. V, No. N, Article A, Publication date: January YYYY.


Monitoring and Instrumentation. While it can be prohibitively expensive to add a sensor to an existing physical system, it is relatively easy to add some code to a piece of software to enable another measurement of its internal state. This can be a "shortcut" towards achieving certain control goals that, for physical systems, would require extensive filter design or complex control strategies. On the other hand, it requires the development of techniques for the, possibly optimal, instrumentation of the code, not only to satisfy control needs (e.g., observability) but also to take into account the impact of the monitoring infrastructure on the production system, both in terms of effectiveness and overhead.

Software Actuation. In general, physical plants that need control strategies are designed to be controlled, according to the natural constraints imposed by physics. Pumps and valves are placed following standard or established industrial design processes. Such standard processes do not exist for controllable software and need to be defined. Similarly to the design of monitoring infrastructures, the optimal design of actuators has to take into account both control needs (e.g., controllability) and software production needs, again in terms of effectiveness but also impact on the global software design and cost of actuation.
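To make the sensor/actuator pattern concrete, the following sketch (purely illustrative; the Service class, its quality knob, the service-time model, and the controller gain are all hypothetical choices of ours) instruments a software component with a measurement of its service time and closes a simple integral-style loop over a tunable quality actuator:

```python
import random

# Illustrative sketch of software sensing and actuation: a "plant" exposing
# a sensor (measured service time) and an actuator (a quality knob), with an
# integral-style controller holding the sensed value at a setpoint.

class Service:
    def __init__(self):
        self.quality = 1.0  # actuator: tunable knob, clamped to [0.1, 1.0]

    def handle_request(self, rng):
        # sensor: service time grows with quality, plus a small disturbance
        return 0.1 + 0.4 * self.quality + rng.uniform(-0.02, 0.02)

def control_step(service, measured, setpoint, gain=2.0):
    # integral-style update of the actuator from the sensed error
    error = setpoint - measured
    service.quality = min(1.0, max(0.1, service.quality + gain * error))

rng = random.Random(42)
service, setpoint = Service(), 0.25
for _ in range(50):
    measured = service.handle_request(rng)   # sense
    control_step(service, measured, setpoint)  # actuate

print(round(service.handle_request(rng), 2))  # settles near the 0.25 setpoint
```

The point of the sketch is how cheap both ends of the loop are in software: the sensor is one extra return value, the actuator one writable field, which is precisely the "shortcut" (and the overhead risk) discussed above.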

REFERENCES

N. Abbas, J. Andersson, and D. Weyns. 2011. Knowledge evolution in autonomic software product lines. In Proceedings of the 15th International Software Product Line Conference (SPLC '11). ACM Press, New York, NY, USA, 1.

T. Abdelzaher, Y. Diao, J. Hellerstein, C. Lu, and X. Zhu. 2008. Introduction to Control Theory and Its Application to Computing Systems. In Performance Modeling and Engineering, Zhen Liu and Cathy H. Xia (Eds.). Springer US, 185–215.

A. Knauss, D. Damian, X. Franch, A. Rook, H. A. Muller, and A. Thomo. 2016. ACon: A Learning-Based Approach to Deal with Uncertainty in Contextual Requirements at Runtime. Information and Software Technology 70 (2016), 85–99.

R. Alur. 2011. Formal verification of hybrid systems. In Embedded Software (EMSOFT), 2011 Proceedings of the International Conference on. 273–278.

Z. Artstein. 1980. Discrete and Continuous Bang-Bang and Facial Spaces Or: Look for the Extreme Points. SIAM Rev. 22, 2 (1980), 172–185.

K. Astrom. 2008. Event Based Control. In Analysis and Design of Nonlinear Control Systems, A. Astolfi and L. Marconi (Eds.). Springer Berlin Heidelberg, 127–147.

K. Astrom and T. Hagglund. 2006. Advanced PID Control. ISA - The Instrumentation, Systems, and Automation Society, Research Triangle Park, NC 27709.

K. Astrom and R. Murray. 2008. Feedback Systems: An Introduction for Scientists and Engineers. Princeton University Press.

K. Astrom and B. Wittenmark. 2013. Adaptive Control. Dover Publications.

W. Baek and T. M. Chilimbi. 2010. Green: A Framework for Supporting Energy-conscious Programming Using Controlled Approximation. In Proceedings of the 2010 ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI '10). ACM, New York, NY, USA, 198–209.

C. Baier, M. Grosser, M. Leucker, B. Bollig, and F. Ciesinski. 2004. Controller synthesis for probabilistic systems. In Proceedings of IFIP TCS 2004. Kluwer.

S. Balsamo, A. D. Marco, P. Inverardi, and M. Simeoni. 2004. Model-based performance prediction in software development: a survey. IEEE Transactions on Software Engineering 30, 5 (May 2004), 295–310.

L. Baresi, E. D. Nitto, and C. Ghezzi. 2006. Toward open-world software: Issues and challenges. Computer 39, 10 (Oct 2006), 36–43.

T. Batista, A. Joolia, and G. Coulson. 2005. Managing Dynamic Reconfiguration in Component-based Systems. In Proceedings of the 2nd European Conference on Software Architecture (EWSA '05). Springer-Verlag, Berlin, Heidelberg, 1–17.

A. Bemporad and M. Morari. 1999. Robust model predictive control: A survey. In Robustness in Identification and Control, A. Garulli and A. Tesi (Eds.). Lecture Notes in Control and Information Sciences, Vol. 245. Springer London, 207–226.


N. Bencomo and A. Belaggoun. 2014. A world full of surprises: Bayesian theory of surprise to quantify degrees of uncertainty. In Companion Proceedings of the 36th International Conference on Software Engineering. ACM, 460–463.

M. Benedikt, R. Lenhardt, and J. Worrell. 2013. LTL model checking of interval Markov chains. In Tools and Algorithms for the Construction and Analysis of Systems. Springer, 32–46.

S. Boyd, C. Barratt, and S. Norman. 1990. Linear controller design: limits of performance via convex optimization. Proc. IEEE 78, 3 (Mar 1990), 529–574.

V. A. Braberman, N. D'Ippolito, J. Kramer, D. Sykes, and S. Uchitel. 2015. MORPH: A Reference Architecture for Configuration and Behaviour Self-Adaptation. In 1st International Workshop on Control Theory for Software Engineering (CTSE 2015), Bergamo, Italy, August 30 - September 4, 2015.

T. Brazdil, V. Forejt, and A. Kucera. 2008. Controller Synthesis and Verification for Markov Decision Processes with Qualitative Branching Time Objectives. Lecture Notes in Computer Science, Vol. 5126. Springer, 148–159.

F. Brosig, P. Meier, S. Becker, A. Koziolek, H. Koziolek, and S. Kounev. 2015. Quantitative Evaluation of Model-Driven Performance Analysis and Simulation of Component-Based Architectures. IEEE Transactions on Software Engineering 41, 2 (Feb. 2015), 157–175.

D. Brugali. 2007. Software Engineering for Experimental Robotics. Springer Berlin Heidelberg.

Y. Brun, G. Marzo Serugendo, C. Gacek, H. Giese, H. Kienle, M. Litoiu, H. Muller, M. Pezze, and M. Shaw. 2009. Engineering Self-Adaptive Systems through Feedback Loops. In Software Engineering for Self-Adaptive Systems. Lecture Notes in Computer Science, Vol. 5525. Springer, 48–70.

K.-Y. Cai. 2002. Optimal software testing and adaptive software testing in the context of software cybernetics. Information and Software Technology 44, 14 (2002), 841–855.

R. Calinescu, C. Ghezzi, M. Kwiatkowska, and R. Mirandola. 2012. Self-adaptive software needs quantitative verification at runtime. Commun. ACM 55, 9 (2012), 69–77.

A. Camacho, P. Marti, M. Velasco, C. Lozoya, R. Villa, J. M. Fuertes, and E. Griful. 2010. Self-triggered networked control systems: An experimental case study. In Industrial Technology (ICIT), 2010 IEEE International Conference on. 123–128.

E. Camacho and C. Alba. 2013. Model Predictive Control. Springer London.

J. Camara, R. de Lemos, N. Laranjeiro, R. Ventura, and M. Vieira. 2013. Robustness Evaluation of Controllers in Self-Adaptive Software Systems. In Dependable Computing (LADC), 2013 Sixth Latin-American Symposium on. 1–10.

J. Camara, G. A. Moreno, and D. Garlan. 2015. Reasoning About Human Participation in Self-adaptive Systems. In Proceedings of the 10th International Symposium on Software Engineering for Adaptive and Self-Managing Systems (SEAMS '15). IEEE Press, Piscataway, NJ, USA, 146–156.

M. C. Campi and S. Garatti. 2011. A Sampling-and-Discarding Approach to Chance-Constrained Optimization: Feasibility and Optimality. Journal of Optimization Theory and Applications 148, 2 (2011), 257–280.

M. C. Campi, S. Garatti, and M. Prandini. 2009. The scenario approach for systems and control design. Annual Reviews in Control 33, 2 (2009), 149–157.

K. Chatterjee, T. A. Henzinger, B. Jobstmann, and R. Singh. 2015. Measuring and Synthesizing Systems in Probabilistic Environments. J. ACM 62, 1, Article 9 (March 2015), 34 pages.

K. Chatterjee, M. Jurdzinski, and T. A. Henzinger. 2003. Simple Stochastic Parity Games. In Computer Science Logic, M. Baaz and J. A. Makowsky (Eds.). Lecture Notes in Computer Science, Vol. 2803. Springer Berlin Heidelberg, 100–113.

K. Chatterjee, M. Jurdzinski, and T. A. Henzinger. 2004. Quantitative Stochastic Parity Games. In Proceedings of the Fifteenth Annual ACM-SIAM Symposium on Discrete Algorithms (SODA '04). Society for Industrial and Applied Mathematics, Philadelphia, PA, USA, 121–130.

K. Chatterjee, K. Sen, and T. A. Henzinger. 2008. Model-checking ω-regular properties of interval Markov chains. In Foundations of Software Science and Computational Structures. Springer, 302–317.

T. Chen, T. Han, and M. Kwiatkowska. 2013. On the complexity of model checking interval-valued discrete time Markov chains. Inform. Process. Lett. 113, 7 (2013), 210–216.

B. Cheng, R. de Lemos, H. Giese, P. Inverardi, J. Magee, J. Andersson, B. Becker, N. Bencomo, Y. Brun, B. Cukic, G. Marzo Serugendo, S. Dustdar, A. Finkelstein, C. Gacek, K. Geihs, V. Grassi, G. Karsai, H. Kienle, J. Kramer, M. Litoiu, S. Malek, R. Mirandola, H. Muller, S. Park, M. Shaw, M. Tichy, M. Tivoli, D. Weyns, and J. Whittle. 2009. Software Engineering for Self-Adaptive Systems: A Research Roadmap. In Software Engineering for Self-Adaptive Systems. Springer-Verlag, Berlin, Heidelberg, 1–26.


S.-W. Cheng and D. Garlan. 2007. Handling Uncertainty in Autonomic Systems. In Proceedings of the International Workshop on Living with Uncertainties (IWLU '07), co-located with the 22nd International Conference on Automated Software Engineering (ASE '07).

E. Darulova and V. Kuncak. 2013. Certifying Solutions for Numerical Constraints. Springer Berlin Heidelberg, Berlin, Heidelberg, 277–291.

E. Darulova and V. Kuncak. 2014. Sound Compilation of Reals. In Proceedings of the 41st ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages (POPL '14). ACM, New York, NY, USA, 235–248.

E. M. Dashofy, A. van der Hoek, and R. N. Taylor. 2002. Towards Architecture-based Self-healing Systems. In Proceedings of the First Workshop on Self-healing Systems (WOSS '02). ACM, New York, NY, USA, 21–26.

C. Dehnert, S. Junges, N. Jansen, F. Corzilius, M. Volk, H. Bruintjes, J.-P. Katoen, and E. Abraham. 2015. PROPhESY: A PRObabilistic ParamEter SYnthesis Tool. Springer, 214–231.

G. Delaval, E. Rutten, and H. Marchand. 2013. Integrating discrete controller synthesis into a reactive programming language compiler. Discrete Event Dynamic Systems 23, 4 (2013), 385–418.

Y. Diao, J. Hellerstein, S. Parekh, R. Griffith, G. Kaiser, and D. Phung. 2005. Self-managing systems: a control theory foundation. In ECBS Workshop. 441–448.

Y. Diao, J. L. Hellerstein, S. Parekh, R. Griffith, G. E. Kaiser, and D. Phung. 2006. A Control Theory Foundation for Self-managing Computing Systems. IEEE J. Sel. A. Commun. 23, 12 (Sept. 2006), 2213–2222.

Z. Ding, Y. Zhou, and M. Zhou. 2014. Modeling Self-adaptive Software Systems with Learning Petri Nets. In Companion Proceedings of the 36th International Conference on Software Engineering (ICSE Companion 2014). ACM, New York, NY, USA, 464–467.

N. D'Ippolito, V. Braberman, N. Piterman, and S. Uchitel. 2013. Synthesising Non-Anomalous Event-Based Controllers for Liveness Goals. ACM Trans. Softw. Eng. Methodol. 22, Article 9 (2013), 36 pages.

N. R. D'Ippolito, V. Braberman, N. Piterman, and S. Uchitel. 2010. Synthesis of Live Behaviour Models. In Proceedings of the Eighteenth ACM SIGSOFT International Symposium on Foundations of Software Engineering (FSE '10). ACM, New York, NY, USA, 77–86.

R. Dorf and R. Bishop. 2008. Modern Control Systems. Prentice Hall.

C. Dubach, P. Cheng, R. Rabbah, D. F. Bacon, and S. J. Fink. 2012. Compiling a High-level Language for GPUs: (via Language Support for Architectures and Compilers). In Proceedings of the 33rd ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI '12). ACM, New York, NY, USA, 1–12.

J. Eker, J. W. Janneck, E. A. Lee, J. Liu, X. Liu, J. Ludvig, S. Neuendorffer, S. Sachs, and Y. Xiong. 2003. Taming heterogeneity - the Ptolemy approach. Proc. IEEE 91, 1 (2003), 127–144.

I. Epifani, C. Ghezzi, R. Mirandola, and G. Tamburrelli. 2009. Model evolution by run-time parameter adaptation. In Software Engineering, 2009. ICSE 2009. IEEE 31st International Conference on. IEEE, 111–121.

N. Esfahani, E. Kouroshfar, and S. Malek. 2011. Taming Uncertainty in Self-Adaptive Software. In SIGSOFT/FSE '11: Proceedings of the 19th ACM SIGSOFT Symposium and the 13th European Conference on Foundations of Software Engineering. ACM, 234–244.

N. Esfahani and S. Malek. 2013. Uncertainty in Self-Adaptive Software Systems. In Software Engineering for Self-Adaptive Systems II, R. de Lemos, H. Giese, H. A. Muller, and M. Shaw (Eds.). Vol. 7475. Springer Berlin Heidelberg, 214–238.

J. Filar and K. Vrieze. 1996. Competitive Markov Decision Processes. Springer-Verlag New York, Inc., New York, NY, USA.

A. Filieri, C. Ghezzi, A. Leva, and M. Maggio. 2011b. Self-adaptive software meets control theory: A preliminary approach supporting reliability requirements. In 26th IEEE/ACM International Conference on Automated Software Engineering (ASE 2011), Lawrence, KS, USA, November 6-10, 2011, P. Alexander, C. S. Pasareanu, and J. G. Hosking (Eds.). IEEE, 283–292.

A. Filieri, C. Ghezzi, and G. Tamburrelli. 2011a. Run-time Efficient Probabilistic Model Checking. In Proceedings of the 33rd International Conference on Software Engineering (ICSE '11). ACM, New York, NY, USA, 341–350.

A. Filieri, H. Hoffmann, and M. Maggio. 2014. Automated Design of Self-adaptive Software with Control-theoretical Formal Guarantees. In Proceedings of the 36th International Conference on Software Engineering (ICSE 2014). ACM, New York, NY, USA, 299–310.

A. Filieri, H. Hoffmann, and M. Maggio. 2015a. Automated Multi-Objective Control for Self-Adaptive Software Design. In Proceedings of the 10th Joint Meeting of the European Software Engineering Conference and the ACM SIGSOFT Symposium on the Foundations of Software Engineering (ESEC/FSE 2015). ACM, 13–24.

A. Filieri, M. Maggio, K. Angelopoulos, N. D'Ippolito, I. Gerostathopoulos, A. Hempel, H. Hoffmann, P. Jamshidi, E. Kalyvianaki, C. Klein, F. Krikava, S. Misailovic, A. Papadopoulos, S. Ray, A. Sharifloo, S. Shevtsov, M. Ujma, and T. Vogel. 2015b. Software Engineering Meets Control Theory. In Proceedings of the 10th International Symposium on Software Engineering for Adaptive and Self-Managing Systems (SEAMS '15). IEEE, 12.

M. Fowler and K. Beck. 1999. Refactoring: Improving the Design of Existing Code. Addison-Wesley.

M. Franzle and C. Herde. 2007. HySAT: An efficient proof engine for bounded model checking of hybrid systems. Formal Methods in System Design 30, 3 (2007), 179–198.

S. Gao, S. Kong, and E. Clarke. 2013. Satisfiability modulo ODEs. In Formal Methods in Computer-Aided Design (FMCAD), 2013. 105–112.

D. Garlan, S. Cheng, A. Huang, B. R. Schmerl, and P. Steenkiste. 2004a. Rainbow: Architecture-Based Self-Adaptation with Reusable Infrastructure. IEEE Computer 37, 10 (2004), 46–54.

D. Garlan, S.-W. Cheng, A.-C. Huang, B. Schmerl, and P. Steenkiste. 2004b. Rainbow: architecture-based self-adaptation with reusable infrastructure. Computer 37, 10 (Oct 2004), 46–54.

H. Giese, N. Bencomo, L. Pasquale, A. J. Ramirez, P. Inverardi, S. Watzoldt, and S. Clarke. 2014. Living with uncertainty in the age of runtime models. In Models@run.time. Springer, 47–100.

R. Goebel, R. Sanfelice, and A. Teel. 2012. Hybrid Dynamical Systems: Modeling, Stability, and Robustness. Princeton University Press.

G. C. Goodwin, H. Kong, G. Mirzaeva, and M. M. Seron. 2014. Robust model predictive control: reflections and opportunities. Journal of Control and Decision 1, 2 (April 2014), 115–148.

S. Govindan, J. Liu, A. Kansal, and A. Sivasubramaniam. 2011. Cuanta: Quantifying Effects of Shared On-chip Resource Interference for Consolidated Virtual Machines. In Proceedings of the 2nd ACM Symposium on Cloud Computing (SOCC '11). ACM, New York, NY, USA, Article 22, 14 pages.

L. Grune and J. Pannek. 2011. Nonlinear Model Predictive Control. Springer.

E. M. Hahn, H. Hermanns, B. Wachter, and L. Zhang. 2010. PARAM: A Model Checker for Parametric Markov Models. In Proc. 22nd International Conference on Computer Aided Verification (CAV '10) (LNCS), Vol. 6174. Springer, 660–664.

M. Harman, Y. Jia, W. B. Langdon, J. Petke, I. H. Moghadam, S. Yoo, and F. Wu. 2014. Genetic Improvement for Adaptive Software Engineering (Keynote). In Proceedings of the 9th International Symposium on Software Engineering for Adaptive and Self-Managing Systems (SEAMS 2014). ACM, New York, NY, USA, 1–4.

E. N. Hartley, J. L. Jerez, A. Suardi, J. M. Maciejowski, E. C. Kerrigan, and G. A. Constantinides. 2014. Predictive Control Using an FPGA With Application to Aircraft Control. IEEE Transactions on Control Systems Technology 22, 3 (May 2014), 1006–1017.

W. P. M. H. Heemels, K. H. Johansson, and P. Tabuada. 2012. An introduction to event-triggered and self-triggered control. In Decision and Control (CDC), 2012 IEEE 51st Annual Conference on. 3270–3285.

W. P. M. H. Heemels, J. H. Sandee, and P. P. J. Van Den Bosch. 2008. Analysis of event-driven controllers for linear systems. Internat. J. Control 81, 4 (2008), 571–590.

J. L. Hellerstein. 2009. Engineering Autonomic Systems. In Proceedings of the 6th International Conference on Autonomic Computing (ICAC '09). ACM, New York, NY, USA, 75–76.

J. L. Hellerstein, Y. Diao, S. Parekh, and D. M. Tilbury. 2004. Feedback Control of Computing Systems. John Wiley & Sons.

J. L. Hellerstein, V. Morrison, and E. Eilebrecht. 2010. Applying control theory in the real world: experience with building a controller for the .NET thread pool. SIGMETRICS Perform. Eval. Rev. 37, 3 (Jan 2010), 38–42.

T. Henzinger, P.-H. Ho, and H. Wong-Toi. 1997. HyTech: A model checker for hybrid systems. In Computer Aided Verification. Lecture Notes in Computer Science, Vol. 1254. Springer Berlin Heidelberg, 460–463.

T. A. Henzinger, P. W. Kopke, A. Puri, and P. Varaiya. 1998. What's Decidable about Hybrid Automata? J. Comput. System Sci. 57, 1 (1998), 94–124.

H. Hermes and J. P. Lasalle. 1969. Functional Analysis and Time Optimal Control. Elsevier Science.

H. Hoffmann. 2014. CoAdapt: Predictable Behavior for Accuracy-Aware Applications Running on Power-Aware Systems. In Real-Time Systems (ECRTS), 2014 26th Euromicro Conference on. IEEE, 223–232.

H. Hoffmann. 2015. Software Engineering Meets Control Theory. In Proceedings of the 25th International Symposium on Operating Systems (SOSP '15). ACM, 14.


H. Hoffmann, S. Sidiroglou, M. Carbin, S. Misailovic, A. Agarwal, and M. Rinard. 2011. Dynamic Knobs for Responsive Power-aware Computing. In Proceedings of the Sixteenth International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS XVI). ACM, New York, NY, USA, 199–212.

P. Inverardi and M. Tivoli. 2003. Software architecture for correct components assembly. In Formal Methods for Software Architectures. Springer, 92–121.

H. I. Ismail, I. V. Bessa, L. C. Cordeiro, E. B. de Lima Filho, and J. E. Chaves Filho. 2015. DSVerifier: A Bounded Model Checking Tool for Digital Systems. Springer International Publishing, Cham, 126–131.

P. Jamshidi, A. Ahmad, and C. Pahl. 2014. Autonomic resource provisioning for cloud-based software. In Proceedings of the 9th International Symposium on Software Engineering for Adaptive and Self-Managing Systems (SEAMS 2014). ACM, New York, NY, USA, 95–104.

P. Jamshidi, A. M. Sharifloo, C. Pahl, H. Arabnejad, A. Metzger, and G. Estrada. 2016. Fuzzy Self-Learning Controllers for Elasticity Management in Dynamic Cloud Architectures. QoSA (2016).

J. Jantzen. 2013. Foundations of Fuzzy Control: A Practical Approach.

S. Kang and D. Garlan. 2014. Architecture-Based Planning of Software Evolution. International Journal of Software Engineering and Knowledge Engineering 24, 2 (2014), 211–242.

J. Kephart and D. Chess. 2003. The vision of autonomic computing. Computer 36, 1 (Jan 2003), 41–50.

E. C. Kerrigan. 2014. Co-design of hardware and algorithms for real-time optimization. In Control Conference (ECC), 2014 European. 2484–2489.

H. Khalil. 2015. Nonlinear Control. Pearson Education Limited.

C. Klein, M. Maggio, K.-E. Arzen, and F. Hernandez-Rodriguez. 2014. Brownout: Building More Robust Cloud Applications. In Proceedings of the 36th International Conference on Software Engineering (ICSE 2014). ACM, New York, NY, USA, 700–711.

M. Kong, R. Veras, K. Stock, F. Franchetti, L.-N. Pouchet, and P. Sadayappan. 2013. When Polyhedral Transformations Meet SIMD Code Generation. In Proceedings of the 34th ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI '13). ACM, New York, NY, USA, 127–138.

S. Kowshik, D. Dhurjati, and V. Adve. 2002. Ensuring Code Safety Without Runtime Checks for Real-time Control Systems. In Proceedings of the 2002 International Conference on Compilers, Architecture, and Synthesis for Embedded Systems (CASES '02). ACM, New York, NY, USA, 288–297.

J. Kramer and J. Magee. 2007. Self-Managed Systems: an Architectural Challenge. In International Conference on Software Engineering, ICSE 2007, Workshop on the Future of Software Engineering, FOSE 2007, May 23-25, 2007, Minneapolis, MN, USA. 259–268.

F. Krikava, P. Collet, R. France, et al. 2014. ACTRESS: Domain-Specific Modeling of Self-Adaptive Software Architectures. In Symposium on Applied Computing, track on Dependable and Adaptive Distributed Systems.

P. R. Kumar and P. Varaiya. 1986. Stochastic Systems: Estimation, Identification and Adaptive Control. Prentice-Hall, Upper Saddle River, NJ, USA.

M. Kwiatkowska and D. Parker. 2013. Automated Verification and Strategy Synthesis for Probabilistic Systems. In Proc. 11th International Symposium on Automated Technology for Verification and Analysis (ATVA '13) (LNCS), Vol. 8172. Springer, 5–22.

G. Labinaz, M. M. Bayoumi, and K. Rudie. 1997. A survey of modeling and control of hybrid systems. Annual Reviews in Control 21 (1997), 79–92.

P. Lama and X. Zhou. 2013. Autonomic Provisioning with Self-adaptive Neural Fuzzy Control for Percentile-based Delay Guarantee. Transactions on Autonomous and Adaptive Systems (TAAS) 8, 2 (2013), 9.

S. Lambert. 2001. Knowledge-Based Control Systems. In SOFSEM 2001: Theory and Practice of Informatics, L. Pacholski and P. Ruzicka (Eds.). Lecture Notes in Computer Science, Vol. 2234. Springer Berlin Heidelberg, 75–89.

E. Letier, D. Stefan, and E. T. Barr. 2014. Uncertainty, Risk, and Information Value in Software Requirements and Architecture. In Proceedings of the 36th International Conference on Software Engineering (ICSE 2014). ACM, New York, NY, USA, 883–894.

A. Leva, M. Maggio, A. Papadopoulos, and F. Terraneo. 2013. Control-based Operating System Design. IET.

A. Leva and A. Papadopoulos. 2013. Tuning of event-based industrial controllers with simple stability guarantees. Journal of Process Control 23, 9 (2013), 1251–1260.

W. Levine. 2010. The Control Systems Handbook, Second Edition: Control System Advanced Methods. Taylor and Francis.

D. Liberzon. 2003. Switching in Systems and Control. Birkhauser Boston.


J. Liu, J. Eker, and J. Janneck. 2004. Actor-oriented control system design: A responsible framework perspective. IEEE Transactions on Control Systems Technology 12, 2 (2004), 250–262.

L. Ljung. 2012. System Identification: Theory for the User (2nd ed.). Prentice Hall PTR, Upper Saddle River, NJ.

J. Lunze and F. Lamnabhi-Lagarrigue. 2009. Handbook of Hybrid Systems Control: Theory, Tools, Applications. Cambridge University Press.

J. Lunze and D. Lehmann. 2010. A state-feedback approach to event-based control. Automatica 46, 1 (2010), 211–215.

J. Maciejowski. 2002. Predictive Control: With Constraints. Pearson Education.

M. Maggio, H. Hoffmann, A. Papadopoulos, J. Panerati, M. Santambrogio, A. Agarwal, and A. Leva. 2012. Comparison of Decision Making Strategies for Self-Optimization in Autonomic Computing Systems. ACM Transactions on Autonomous and Adaptive Systems 7, 4, Article 36 (Dec. 2012), 32 pages.

R. Matinnejad, S. Nejati, L. C. Briand, and T. Bruckmann. 2016. Automated Test Suite Generation for Time-continuous Simulink Models. In Proceedings of the 38th International Conference on Software Engineering (ICSE '16). ACM, New York, NY, USA, 595–606.

J. Mendel and D. Wu. 2010. Perceptual Computing: Aiding People in Making Subjective Judgments. Wiley-IEEE Press.

M. Morari and E. Zafiriou. 1989. Robust Process Control. Prentice Hall, Englewood Cliffs.

H. Muller, M. Pezze, and M. Shaw. 2008. Visibility of Control in Adaptive Systems. In Proceedings of the 2nd International Workshop on Ultra-large-scale Software-intensive Systems (ULSSIS '08). ACM, New York, NY, USA, 23–26.

A. Muscholl. 2015. Automated Synthesis of Distributed Controllers. Springer Berlin Heidelberg, Berlin, Heidelberg, 11–27.

G. Ofenbeck, T. Rompf, A. Stojanov, M. Odersky, and M. Puschel. 2013. Spiral in Scala: Towards the Systematic Construction of Generators for Performance Libraries. In Proceedings of the 12th International Conference on Generative Programming: Concepts & Experiences (GPCE '13). ACM, New York, NY, USA, 125–134.

P. Oreizy, M. M. Gorlick, R. N. Taylor, D. Heimbigner, G. Johnson, N. Medvidovic, A. Quilici, D. S. Rosenblum, and A. L. Wolf. 1999. An Architecture-Based Approach to Self-Adaptive Software. IEEE Intelligent Systems 14, 3 (May 1999), 54–62.

A. Papadopoulos, M. Maggio, F. Terraneo, and A. Leva. 2014. A Dynamic Modelling Framework for Control-based Computing System Design. Mathematical and Computer Modelling of Dynamical Systems (2014).

A. V. Papadopoulos, A. Ali-Eldin, K.-E. Arzen, J. Tordsson, and E. Elmroth. 2016. PEAS: A Performance Evaluation Framework for Auto-Scaling Strategies in Cloud Applications. ACM Transactions on Modeling and Performance Evaluation of Computing Systems (TOMPECS) 1, 4, Article 15 (2016), 31 pages.

S. Parekh. 2010. Feedback Control Techniques for Performance Management of Computing Systems. Ph.D. Dissertation. Seattle, WA, USA. Advisor(s): Lazowska, Edward D.

S. Parekh, N. Gandhi, J. Hellerstein, D. Tilbury, T. Jayram, and J. Bigus. 2002. Using Control Theory to Achieve Service Level Objectives in Performance Management. Real-Time Syst. 23, 1/2 (July 2002), 127–141.

T. Patikirikorala, A. Colman, J. Han, and L. Wang. 2012. A Systematic Survey on the Design of Self-adaptive Software Systems Using Control Engineering Approaches. In Proceedings of the 7th International Symposium on Software Engineering for Adaptive and Self-Managing Systems (SEAMS '12). IEEE Press, Piscataway, NJ, USA, 33–42.

N. Piterman, A. Pnueli, and Y. Sa'ar. 2006. Synthesis of Reactive(1) Designs. In Verification, Model Checking, and Abstract Interpretation, E. A. Emerson and K. S. Namjoshi (Eds.). Lecture Notes in Computer Science, Vol. 3855. Springer Berlin Heidelberg, 364–380.

A. Pnueli. 1977. The Temporal Logic of Programs. In 18th Annual Symposium on Foundations of Computer Science, Providence, Rhode Island, USA, 31 October - 1 November 1977.

A. Pnueli and R. Rosner. 1989. On the Synthesis of a Reactive Module. In Conference Record of the Sixteenth Annual ACM Symposium on Principles of Programming Languages, Austin, Texas, USA, January 11-13, 1989. 179–190.

A. Puggelli, W. Li, A. L. Sangiovanni-Vincentelli, and S. A. Seshia. 2013. Polynomial-time verification of PCTL properties of MDPs with convex uncertainties. In Computer Aided Verification. Springer, 527–542.

M. L. Puterman. 1994. Markov Decision Processes: Discrete Stochastic Dynamic Programming (1st ed.). John Wiley & Sons, Inc., New York, NY, USA.


W. Qian, X. Peng, B. Chen, J. Mylopoulos, H. Wang, and W. Zhao. 2014. Rationalism with a dose of em-piricism: Case-based reasoning for requirements-driven self-adaptation. In Requirements EngineeringConference (RE), 2014 IEEE 22nd International. 113–122.

R. R. Rajkumar, I. Lee, L. Sha, and J. Stankovic. 2010. Cyber-physical Systems: The Next Computing Revo-lution. In Proceedings of the 47th Design Automation Conference (DAC ’10). ACM, New York, NY, USA,731–736.

P. Ramadge and W. Wonham. 1989. The control of discrete event systems. Proc. IEEE 77, 1 (Jan 1989),81–98.

A. J. Ramirez, B. H. C. Cheng, N. Bencomo, and P. Sawyer. 2012a. Relaxing Claims: Coping with UncertaintyWhile Evaluating Assumptions at Run Time. Springer Berlin Heidelberg, Berlin, Heidelberg, 53–69.

A. J. Ramirez, A. C. Jensen, and B. H. C. Cheng. 2012b. A taxonomy of uncertainty for dynamically adap-tive systems. In 2012 7th International Symposium on Software Engineering for Adaptive and Self-Managing Systems (SEAMS). IEEE, 99–108.

J. Rao, Y. Wei, J. Gong, and C.-Z. Xu. 2011. DynaQoS: Model-free Self-tuning Fuzzy Control of VirtualizedResources for QoS Provisioning. In International Workshop on Quality of Service (IWQoS). IEEE, 1–9.

T. Rompf and M. Odersky. 2010. Lightweight Modular Staging: A Pragmatic Approach to Runtime CodeGeneration and Compiled DSLs. In Proceedings of the Ninth International Conference on GenerativeProgramming and Component Engineering (GPCE ’10). ACM, New York, NY, USA, 127–136.

M. Salehie and L. Tahvildari. 2009. Self-adaptive Software: Landscape and Research Challenges. ACMTrans. Auton. Adapt. Syst. 4, 2, Article 14 (May 2009), 14:1–14:42 pages.

M. Samadi, D. A. Jamshidi, J. Lee, and S. Mahlke. 2014. Paraprox: Pattern-based Approximation for DataParallel Applications. In Proceedings of the 19th International Conference on Architectural Support forProgramming Languages and Operating Systems (ASPLOS ’14). ACM, New York, NY, USA, 35–50.

R. Sanz and J. Zalewski. 2003. Pattern-based control systems engineering. IEEE Control Systems 23, 3 (June2003), 43–60.

R. Scattolini. 2009. Architectures for distributed and hierarchical Model Predictive Control–a review. Jour-nal of Process Control 19, 5 (2009), 723–731.

S. Skogestad and I. Postlethwaite. 2007. Multivariable feedback control: analysis and design. Vol. 2. WileyNew York.

V. E. S. Souza, A. Lapouchnian, and J. Mylopoulos. 2011. System Identification for Adaptive Software Sys-tems: a Requirements Engineering Perspective. In Conceptual Modeling – ER 2011, Manfred Jeusfeld,Lois Delcambre, and Tok-Wang Ling (Eds.). Lecture Notes in Computer Science, Vol. 6998. SpringerBerlin Heidelberg, 346–361.

G. Stein. 2003. Respect the unstable. Control Systems, IEEE 23, 4 (Aug 2003), 12–25.

Q. Sun, G. Dai, and W. Pan. 2008. LPV Model and Its Application in Web Server Performance Control. In CSSE, Vol. 3. 486–489.

P. Tabuada. 2007. Event-Triggered Real-Time Scheduling of Stabilizing Control Tasks. Automatic Control, IEEE Transactions on 52, 9 (Sept 2007), 1680–1685.

G. Tesauro. 2007. Reinforcement Learning in Autonomic Computing: A Manifesto and Case Studies. IEEE Internet Computing 11, 1 (2007), 22–30.

M. Tribastone. 2013. A Fluid Model for Layered Queueing Networks. IEEE Trans. Software Eng. 39, 6 (2013), 744–756.

N. M. Villegas, H. A. Muller, G. Tamura, L. Duchien, and R. Casallas. 2011. A Framework for Evaluating Quality-driven Self-adaptive Software Systems. In Proceedings of the 6th International Symposium on Software Engineering for Adaptive and Self-Managing Systems (SEAMS '11). ACM, New York, NY, USA, 80–89.

T. Vogel and H. Giese. 2014. Model-Driven Engineering of Self-Adaptive Software with EUREMA. ACM Trans. Auton. Adapt. Syst. 8, 4, Article 18 (2014), 33 pages.

V. Vyatkin. 2013. Software Engineering in Industrial Automation: State-of-the-Art Review. IEEE Transactions on Industrial Informatics 9, 3 (Aug 2013), 1234–1249.

L. Wang, J. Xu, and M. Zhao. 2015. QoS-driven Cloud Resource Management through Fuzzy Model Predictive Control. In International Conference on Autonomic Computing (ICAC). IEEE.

W.-P. Wang, D. Tipper, and S. Banerjee. 1996. A simple approximation for modeling nonstationary queues. In INFOCOM '96. Fifteenth Annual Joint Conference of the IEEE Computer Societies. Networking the Next Generation. Proceedings IEEE, Vol. 1. 255–262.

J. A. Whittaker and J. H. Poore. 1993. Markov Analysis of Software Specifications. ACM Trans. Softw. Eng. Methodol. 2, 1 (Jan. 1993), 93–106.

B. Wittenmark, K. Astrom, and K.-E. Arzen. 2002. Computer control: An overview. Technical Report.


J. Xu, M. Zhao, J. Fortes, R. Carpenter, and M. Yousif. 2007. On the Use of Fuzzy Modeling in Virtualized Data Center Management. Fourth International Conference on Autonomic Computing (ICAC '07) (June 2007), 25–25.

R. Yager and L. Zadeh. 2012. An Introduction to Fuzzy Logic Applications in Intelligent Systems. Springer US.

T. C. Yang. 2006. Networked control system: a brief survey. IEE Proceedings - Control Theory and Applications 153, 4 (July 2006), 403–412.

N. Zhan, S. Wang, and H. Zhao. 2013. Formal Modelling, Analysis and Verification of Hybrid Systems. ICTAC Training School on Software Engineering (2013), 207–281.

M. Zhang, B. Selic, S. Ali, T. Yue, O. Okariz, and R. Norgren. 2016. Understanding Uncertainty in Cyber-Physical Systems: A Conceptual Model. (2016).

K. Zhou, J. C. Doyle, and K. Glover. 1996. Robust and Optimal Control. Prentice-Hall, Inc., Upper Saddle River, NJ, USA.

X. Zhu, M. Uysal, Z. Wang, S. Singhal, A. Merchant, P. Padala, and K. Shin. 2009. What does control theory bring to systems research? SIGOPS Oper. Syst. Rev. 43 (2009), 62–69.

P. Zuliani, A. Platzer, and E. M. Clarke. 2010. Bayesian Statistical Model Checking with Application to Simulink/Stateflow Verification. In Proceedings of the 13th ACM International Conference on Hybrid Systems: Computation and Control (HSCC '10). ACM, New York, NY, USA, 243–252.

ACM Transactions on Autonomous and Adaptive Systems, Vol. V, No. N, Article A, Publication date: January YYYY.