χSuds-SDL: A Tool for Testing Software Architecture Specifications



Software Quality Journal, 8, 241–253, 2000. © 2000 Kluwer Academic Publishers. Manufactured in The Netherlands.


J. JENNY LI AND J. ROBERT HORGAN, {jjli, jrh}@research.telcordia.com
Telcordia Technologies (formerly Bellcore), 445 South St., Morristown, NJ 07960-6438

Abstract. Available statistical data show that the cost of repairing software faults rises dramatically in later development stages. In particular, the new technology of generating implementation code from architectural specifications requires highly reliable designs. Much research at this stage uses verification and validation techniques to prove correctness in terms of certain properties. A prominent approach in this category is model checking (Atlee, J.M., and Gannon, J. 1993. State-based model checking of event-driven system requirements. IEEE Trans. Software Eng. 19(1): 24–40). Such approaches and the approach of software testing are complementary: testing reveals some errors that cannot easily be identified through verification, and vice versa. This paper presents the technology, and the accompanying tool suite, for testing software architecture specifications. We apply our state-of-the-art technology in software coverage testing, program diagnosis, and understanding to software architectural designs. Our technology is based on both the control flow and the data flow of executable architectural specifications. It first generates a program flow diagram from the specification and then automatically analyses the coverage features of the diagram. It collects the corresponding flow data during design simulation and maps it to the flow diagram. The coverage information for the original specification is then obtained from the coverage information of the flow diagram. This technology has been used for C, C++, and Java, and has proven effective (Agrawal, H., Alberi, J., Li, J.J., et al. 1998. Mining system tests to aid software maintenance, IEEE Computer, July, pp. 64–73).

Keywords: software architectural specification; coverage testing; testing tool; χSuds; Specification and Description Language (SDL)

1. Introduction

As software becomes more complex and distributed, structural issues become as important as issues of data structure and algorithm. Software programming is moving towards high-level architecting, which raises a set of new research issues. Software source-code level programming often utilizes many tools such as debuggers, purifiers, testers, etc. As programming moves up to the level of software architectural design specification, a set of new tools at the architecture specification level will be required. Without proper engineering tools, direct code generation can only be used once, with the first version. Figure 1 illustrates the situation.

Figure 1 shows that without proper tools, modifications to the software have to be done at the source-code level. The automatic or semi-automatic code generation can only be used once, for the first version of the system. Debugging and enhancement have to be done on the source code, because they are difficult at the architectural specification level without tool support. The situation resembles the beginning of the Fortran language. At that time, people sometimes wrote code


Figure 1. A software development paradigm (Li and Horgan 1998).

in Fortran and then modified the generated assembly code. Our set of software tools allows the development process to move along the dashed lines instead of the solid lines in Figure 1. This also promises a more reliable and up-to-date software architectural specification.

The focus of our research is the telecommunications domain. Our software systems are characterized by their huge size (typically well over a million lines of code) and complexity. These systems are composed of interdependent distributed components, some developed in-house, some commercially available, and others developed by the customer. The system configuration, that is, the components comprising the system and their interconnection, typically varies for each customer. Each configuration is given in one architectural design specification.

We use the Specification and Description Language (SDL) (International Telegraph and Telephone Consultative Committee 1989) as a specification language for software architecture. There are four reasons for our choice: (1) SDL can be extended to model software architecture. After all, software architecture is a high-level design abstraction. (2) SDL meets the requirements for an executable architecture description language (Luckham et al. 1995). SDL allows dynamic creation and termination of process instances and their corresponding communication paths during execution. Hence it is capable of modeling the architectures of dynamic distributed systems in which the number of components and connectors may vary during system execution. (3) SDL can represent all four views of software architectures (Kruchten 1995). For example, SDL uses delay and non-delay channels to indicate the relative physical locations of components (Belina et al. 1991). (4) SDL specifications are executable. We can use its simulation to collect execution trace files.

This paper describes the technologies underlying the tool suite we have developed for architectural specification understanding, debugging, and testing. We call our tool suite the χ Software Understanding and Diagnosis System (χSuds). χSuds


contains a suite of tools: χSlice, χVue, χProf, χRegress, and χATAC. The focus of this paper is on χRegress and χATAC, two testing tools.

The underlying technique is to create a flow graph of the specification, i.e., to lay out its execution structure. It then instruments the simulator to collect an execution trace using the flow graph. The trace file records how many times a given part of the specification, such as a process, a transition, a decision, a state input, or a data flow, has been exercised in each simulation of the specification. The coverage data are then used to guide simulation test case selection (in the χATAC tool) and to prioritize and minimize the number of test cases (in the χRegress tool).
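As a rough illustration, the trace-collection step can be sketched in Python as follows. This is not the χSuds implementation; the node names and the collector interface are invented for the example.

```python
from collections import Counter

class TraceCollector:
    """Records how many times each flow-graph node is exercised
    during one simulation run (one test case)."""
    def __init__(self):
        self.counts = Counter()

    def hit(self, node):
        # Called by the instrumented simulator each time a
        # basic transition (flow-graph node) is entered.
        self.counts[node] += 1

# One hypothetical simulated run of a Caller-like process:
trace = TraceCollector()
for node in ["idle", "dial", "waitDig", "dial", "waitDig", "connect"]:
    trace.hit(node)

print(trace.counts["waitDig"])   # exercised twice in this run
print(trace.counts["connect"])   # exercised once
```

One trace file per simulation run is enough to derive all the coverage measures discussed below, since each of them only asks whether a count is zero or positive.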

The remainder of the paper is organized as follows. Section 2 illustrates the underlying theory of our approach through an SDL example. Section 3 describes the χATAC tool for prioritizing parts of the specification for coverage testing and guiding test case generation. Section 4 discusses the regression testing tool, χRegress, for reducing the number of test cases and prioritizing them. Section 5 concludes with a discussion of the strengths and limitations of our technique.

2. The underlying technology

χRegress prioritizes test cases to reduce the number of tests, and χATAC prioritizes the architecture points to be tested, to achieve the best efficiency. Priority is defined by coverage information: the test with the highest coverage has the highest priority, and the architecture points that can achieve the highest coverage have the highest priority. The architecture priority is calculated from the flow graph, and the test case priority is calculated from coverage information derived from simulation trace files.

Different test coverage tools use different coverage criteria. Some of the well-known criteria used in χSuds-SDL include function coverage, basic transition coverage, and decision coverage. Function coverage simply checks that each process of the SDL specification has been executed at least once. A basic transition is simply a statement sequence of the specification that is always executed sequentially, including states and decision constructs (it has no internal branching constructs). Basic transition coverage checks that each basic transition has been executed at least once, which implies that each statement has been executed at least once. Decisions are conditional branches from one basic transition to another; they can be states or decision constructs. Decision coverage checks that each such condition, decision matching or input mapping, has been executed, so that all true and false paths have been taken, as well as all input alternatives and decision alternatives. We use an SDL specification example to illustrate these concepts.

SDL is based on a model of communicating extended finite state machines (CEFSMs) (Brand and Zafiropulo 1983). It provides a hierarchical abstraction of a system structure. The top level is a system-level specification. Each system includes some blocks. Each block can include either blocks or processes. Blocks communicate through channels. Each channel can be either delaying or non-delaying. Each process of a block is defined by an extended finite state machine. Processes communicate through signal routes. Signal routes have no delay. In general, an SDL specification provides a process view of a system's software architecture. Figure 2 shows the system and block level diagram of the architectural specification of a small switch.

Figure 2. System and block level specification of a small switch.

Figure 2 shows that the system Li PBX includes two distributed blocks: CallHandler and ResourceMgr. CallHandler controls call processing, and ResourceMgr involves inventory control and remote database access. The channel between CallHandler and ResourceMgr is a delaying one, which indicates that the two blocks can be implemented on different CPUs with a non-negligible delay. This reflects the reality that database information can be stored remotely. CallHandler includes two processes: Caller and Callee. ResourceMgr has two processes: TMgr and CMgr. The process view of the software architecture is quite clear. The Caller process interacts with the call originator party. Callee handles the terminator party. And TMgr and CMgr control two kinds of resources. A partial specification for one of the processes, Caller, is given in Figure 3. It shows that the partial Caller has 18 basic transitions.

Figure 3. Process level specification, partial Caller.

Each process has one equivalent flow graph. Each node of the graph is either a transition or a part of a transition (as defined previously as basic transitions). For simplicity, we call each node in the diagram a basic transition. The branching of the nodes is caused either by state transitions due to different inputs or by decision matching. For example, following state "waitT" are three branches corresponding to the inputs "gt," "ngt," and "handUp," and following decision "phoneNum" are three branches corresponding to the matching value of the decision: "0," "1," or "else." The dashed lines connect the partial flow graph to the rest of the graph. Loops are given implicitly by reusing the name of a basic transition. For example, basic transition 6 goes back to basic transition 1 in a loop. In our tool, coverage calculated on basic transitions is called "basic block coverage," and coverage calculated on decisions and states (the branching nodes in the diagram) is called "decision coverage," to be consistent with the tools used for other languages. A flow graph for partial Caller, including the basic transitions and their branches, is given in Figure 4.

Simulation testing reveals how many times each node in the flow graph is executed. This coverage information is then mapped back to the original specification to obtain the execution counts of statements, decisions, and states.
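The mapping back to the specification can be sketched as follows. Each basic transition groups several statement locations, so a statement inherits the execution count of its enclosing node. The node and line identifiers below are hypothetical, not taken from the actual Caller specification.

```python
# Map flow-graph node execution counts back to specification statements.
# A statement's count is simply the count of its enclosing basic transition.

node_counts = {"bt1": 3, "bt2": 1, "bt6": 3, "bt7": 0}

# Which specification lines each basic transition contains (invented):
node_to_lines = {"bt1": [10, 11, 12], "bt2": [14], "bt6": [30, 31], "bt7": [33]}

line_counts = {line: count
               for node, count in node_counts.items()
               for line in node_to_lines[node]}

print(line_counts[11])  # statement inside bt1: executed 3 times
print(line_counts[33])  # statement inside bt7: never executed
```

A zero count flags an uncovered statement, which is exactly what the coverage displays in Section 3 highlight for the tester.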

Figure 4. Flow graph of partial process Caller.

The original version of χSuds also used data flow coverage criteria: c-use (variable used inside a computation) and p-use (variable used inside a predicate). Data flow coverage is a bit more complicated because it looks at the ways each program variable is defined and used. A definition is a statement that assigns a value to a variable. A use is an occurrence of that variable in another statement. Since the SDL specification emphasizes the control aspect of the system, we did not implement the data flow coverage criteria. However, for other specification languages where data objects are the emphasis, such as the Unified Modeling Language (UML) (www.rational.com), it is important to implement data flow coverage.
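Although χSuds-SDL does not implement data flow coverage, the c-use/p-use distinction can be illustrated with a small sketch. The statement representation below is invented for the example, and real def-use analysis must follow all control flow paths; this straight-line toy ignores branching.

```python
# Classify uses of variables as p-uses (in predicates) or c-uses
# (in computations), pairing each definition with the uses it reaches.

statements = [
    ("def",  "x"),   # x is assigned a value (a definition)
    ("pred", "x"),   # x appears in a predicate    -> p-use
    ("comp", "x"),   # x appears in a computation  -> c-use
    ("def",  "y"),
    ("comp", "y"),
]

def duse_pairs(stmts):
    """Pair each definition with the later uses it reaches
    (no redefinition in between, in this straight-line sketch)."""
    pairs = []
    last_def = {}
    for i, (kind, var) in enumerate(stmts):
        if kind == "def":
            last_def[var] = i
        elif var in last_def:
            use = "p-use" if kind == "pred" else "c-use"
            pairs.append((last_def[var], i, var, use))
    return pairs

print(duse_pairs(statements))
# [(0, 1, 'x', 'p-use'), (0, 2, 'x', 'c-use'), (3, 4, 'y', 'c-use')]
```

Data flow coverage then asks that each such def-use pair be exercised by some test, which is why it is finer-grained than the control-flow criteria above.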

3. χATAC coverage testing tool

Once an architectural specification has been written, it needs to be tested and/or verified. The chance of an undiscovered fault is higher in the parts of the architecture that have not been simulated than in the ones that have. To identify faults, testers must create test cases that increase coverage.

One difference between an actual program and a specification is that specifications allow informal text, such as "blocking?" in Figure 3. We treat informal text as a regular statement because we keep track of the number of times a transition is reached, not its content. Informal text does not affect trace information collection. The only effect of informal text is that we cannot use actual execution, but only simulation. We have developed our own specification simulation tool. We can also use existing commercial tools such as Telelogic SDT (www.telelogic.com) and Verilog Geode (www.verilog.com). Each test case in this context is one simulator execution of the specification.

The χATAC component of χSuds helps with this testing process by prioritizing specification transitions, states, or decisions. Some transitions or states dominate others in the same process, in that if the dominating one is covered by a test, then many others must also be covered. The dominating ones are found by analyzing the flow graph of the specification process. Thus the dominating ones are good places for users to start in writing test cases; if they are covered, then coverage of the specification can be greatly increased with just a few test cases (Agrawal 1994). The dominator concept has great potential for organizing the testing task systematically and efficiently. Figure 5 shows how χATAC displays the priorities before any test cases have been run. In fact, this is the start-up screen of the tool.

Figure 5. Identifying coverage testing priorities.
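The dominator relation behind this prioritization can be sketched with a standard iterative computation: node n dominates node m when every path from the entry to m passes through n, so any test that covers m necessarily covers n. The graph and node names below are invented; this is a textbook formulation, not the super-block analysis of Agrawal (1994).

```python
# Iterative dataflow computation of dominator sets on a toy flow graph.

def dominators(graph, entry):
    """graph: {node: [successors]}. Returns {node: set of dominators}."""
    nodes = set(graph)
    preds = {n: {p for p in graph if n in graph[p]} for n in nodes}
    dom = {n: set(nodes) for n in nodes}
    dom[entry] = {entry}
    changed = True
    while changed:
        changed = False
        for n in nodes - {entry}:
            new = ({n} | set.intersection(*(dom[p] for p in preds[n]))
                   if preds[n] else {n})
            if new != dom[n]:
                dom[n], changed = new, True
    return dom

# entry -> a -> b -> d  and  entry -> a -> c -> d
graph = {"entry": ["a"], "a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
dom = dominators(graph, "entry")
print(sorted(dom["d"]))  # ['a', 'd', 'entry']: covering d implies covering a
```

In this toy graph, a test case that reaches node d is guaranteed to have covered node a as well, which is why a tool can suggest the deepest such nodes as high-priority targets.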

In Figure 5, the color scale at the top of the window indicates that if any one of the transitions highlighted in red (darkest in a monochrome graph) is tested, at least 31 transitions will be covered. The developer can concentrate on these in writing his/her first test case. After this case has been run, the updated χATAC display will prioritize just those transitions that remain uncovered, to help in creating the next test case. For example, in this case, we design a first test case, "atac.1," to cover the red transition "input donee; output relC." Figure 6 is the updated display after running this first test.

Besides prioritizing the transitions to be tested, the χATAC analysis tool allows testers to measure how thoroughly a specification has been exercised by a set of tests, that is, how thoroughly it has been tested. The specification is instrumented, normally just before its code generation, so that it will produce a trace of execution when it is simulated. After the tests are run, χATAC analyzes the traces to determine which components of the specification have been covered and generates a coverage summary. The summary includes function entry coverage, basic transition coverage, and decision coverage.

Figure 6. Updated display after the first test.

Figure 7 gives the χATAC summary of the small switch architecture after running 15 tests. Each test case has one associated set of coverage data. This summary is for decision coverage, including a percentage and an absolute number of decisions covered. The total number of decisions is 44; "atac.1" covers 17 of them. Clicking on the source file will show the tester which basic transitions have been covered (highlighted in white). The "Options" button allows the tester to switch easily to other summaries, such as block coverage.
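The per-test figures in such a summary are straightforward to derive from the covered-decision sets. In the sketch below, only the total of 44 decisions and atac.1's count of 17 come from the text; the per-test sets themselves are invented.

```python
# Sketch of the per-test decision-coverage summary of Figure 7:
# for each test, the absolute number and percentage of decisions covered.

TOTAL_DECISIONS = 44

covered_by = {
    "atac.1": set(range(17)),        # atac.1 covers 17 decisions (from the text)
    "atac.2": set(range(10, 22)),    # hypothetical
}

def summary(test):
    n = len(covered_by[test])
    return n, round(100.0 * n / TOTAL_DECISIONS, 1)

print(summary("atac.1"))  # (17, 38.6)
```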

The diagrams above, Figures 5 and 6, show basic transition coverage. Figure 8 shows the decision coverage criteria in the χATAC tool.

Figure 7. Summary after 15 tests.


Figure 8. Identifying decision coverage priorities.

4. χRegress: regression test tool

Software architecture designs are frequently modified, either to fix a problem or to implement a new feature, even in the coding stage. Testing is required to ensure that the new feature works and the old ones still work. It is not economical to re-run all the regression tests from the previous version after each change. In most cases, the change may amount to less than 10% of the specification. Designers need to focus on the part of the specification they have, in fact, changed and then develop or re-use tests that cover these parts as quickly as possible. The question arises of how to select a set of tests to be re-run, and in what order. χRegress can determine which test cases do not cover any of the modified architecture. These test cases do not need to be rerun, because the change does not affect them. For example, suppose a bug in the architecture has been found by the χSlice tool. A fix is to add the release of resource "T" after hanging up at the "waitDig" state. The red point in Figure 9 highlights the place to be changed.

χRegress identifies the tests that do not simulate the transition "input handUp; reset(t1);" at the state "waitDig." It turns out in this case that only test case "atac.9" covers this transition. Therefore, we need only rerun this test after the modification. We do not have to rerun the entire set of 15 tests; we have cut the cost by 14/15 (93.3%).

The regression test set is often very large, because it grows as the design evolves.

Old tests are rarely discarded, because it is hard to tell whether they are redundant by looking only at the descriptions of the tests. χRegress helps with this process by

ranking the test cases by the additional coverage they provide on the modified part of the architecture. The tool can also consider the cost of running each test. This example uses a cost of 1 for each test case; the tester can specify a different cost for each test.

Figure 9. A small modification to the architecture.
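The selection of impacted tests described above amounts to intersecting each test's covered transitions with the modified ones. The per-test coverage sets in this sketch are invented; only the outcome mirrors the example in the text, where just atac.9 covers the changed transition.

```python
# Sketch of χRegress's impact selection: a test must be rerun only if it
# covers some modified part of the architecture.

modified = {("waitDig", "input handUp; reset(t1)")}

# Hypothetical per-test coverage: most tests exercise other transitions.
coverage = {f"atac.{i}": {("waitDig", "input digit")} for i in range(1, 16)}
coverage["atac.9"] = {("waitDig", "input handUp; reset(t1)"),
                      ("waitDig", "input digit")}

rerun = [t for t, cov in coverage.items() if cov & modified]
print(rerun)                # ['atac.9']
print(1 - len(rerun) / 15)  # cost saving: 14/15, roughly 0.933
```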

Figure 10 shows how it ranks the 15 existing tests, based on both transition (basic block) coverage and decision coverage, for the small switch architecture. χRegress selects atac.1 as the first test case because it gives the maximum coverage with respect to block and decision coverage per unit cost. It lists subsequent tests according to their additional coverage per unit cost.
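This ranking rule can be sketched as a greedy loop: repeatedly pick the test with the highest additional coverage per unit cost. This is our reading of the ranking described above, not χRegress's actual algorithm, and the test names and coverage sets are invented.

```python
# Greedy ranking by additional coverage per unit cost.

def rank(tests, costs):
    """tests: {name: set of covered items}; costs: {name: cost}."""
    covered, order = set(), []
    remaining = dict(tests)
    while remaining:
        best = max(remaining,
                   key=lambda t: len(remaining[t] - covered) / costs[t])
        if not remaining[best] - covered:
            break                # no test adds coverage; the rest are redundant
        covered |= remaining.pop(best)
        order.append(best)
    return order

tests = {"atac.1": {1, 2, 3, 4}, "atac.2": {3, 4, 5},
         "atac.3": {5}, "atac.11": {1, 2}}
costs = {t: 1 for t in tests}
print(rank(tests, costs))  # ['atac.1', 'atac.2']: the other two add nothing
```

Tests left out of the ranking add no new coverage, which matches how a test like "atac.11" can be flagged as discardable in the next paragraph.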

The tester can use Figure 10 to choose a reduced set of tests that will be permanently retained. In this case, test case "atac.11" can be discarded because it does not increase the total coverage of the test set. Note that this does not mean that it covers the same transitions as test "atac.10" does.

The ranking of the tests is not unique. In fact, in this example, the tests can be ranked differently to achieve the same coverage and cost efficiency. This order is called the optimal order, which always provides the optimal answer. In this order, test case "atac.11" again does not contribute to any increase in coverage, confirming that this test can be discarded. The new ranking is given in Figure 11.

Alternatively, χRegress can also be used to find a subset of the original tests that provides a desired level of coverage at minimal cost. In some situations, under time-to-market pressure, testers are not able to run all the tests. Previous experience shows that beyond some level of testing, the efficiency of finding bugs decreases. In our example, coverage does not change much after running the first 7 tests. We can achieve acceptable coverage while cutting the cost by half.
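Finding such a subset can be sketched as walking the greedy ranking and stopping once a target coverage level is reached. The per-test gains below are invented (only the total of 44 decisions comes from the earlier example), so this illustrates the idea rather than reproducing the tool's numbers.

```python
# Select tests from a greedy ranking until a target coverage fraction is met.

def select_until(ranked_gains, target, total):
    """ranked_gains: [(test, number_of_items_newly_covered)] in greedy order."""
    chosen, covered = [], 0
    for test, gain in ranked_gains:
        if covered / total >= target:
            break
        chosen.append(test)
        covered += gain
    return chosen, covered / total

# Hypothetical diminishing gains, as in the 15-test example:
ranked = [("atac.1", 17), ("atac.4", 9), ("atac.9", 8), ("atac.2", 5),
          ("atac.7", 3), ("atac.3", 1), ("atac.5", 1)]
subset, achieved = select_until(ranked, target=0.9, total=44)
print(subset)    # the first five tests already exceed 90% decision coverage
```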


Figure 10. Ranking of 15 tests.

Figure 11. Another ranking of 15 tests.


Running a small set of test cases is not guaranteed to find all the bugs that could have been detected by the full set of tests (excluding the redundant ones). Since the saving is dramatic, sometimes it is worth the risk. A reduced set of tests selected by χRegress is more likely to detect faults that a manually selected subset of the same size would miss, and likewise for a randomly or arbitrarily selected reduced set of the same size.

5. Conclusions

The use of traces or execution histories as an aid to testing is a well-established technique for programming languages like C and C++. It includes some very sophisticated tools, such as EXDAMS, which stores traces and allows the programmer to replay execution and query the trace to show data values, data flows, program source, and so on during testing and debugging sessions (Balzer 1969). But testing is rarely mentioned in the field of software architecture and design, even though early fault detection can cut costs. Our χSuds-C/C++ tool suite is capable of prioritizing transitions and test cases for testing. We extended this technology to the software architecture specification level to allow more efficient testing, including regression testing.

χSuds tools mostly use dynamic analysis, which uses simulation tracing to relate architecture specification components to specified test cases, a semantic relationship that is extremely difficult to discover statically. However, one must always keep in mind that the results of dynamic analysis depend on the test cases used: if the test set is limited, important specification behavior may be missed. Another advantage of our tool is its user-friendly GUI. This is important, since the volume of information from a large system architectural specification can easily overwhelm the designer. Our GUI organizes the information in a comprehensible way.

Most of the χSuds displays have one of the general layouts shown in Figures 5 and 6. More details on the usage of the tool can be found at www.xsuds.com or xsuds.argreenhouse.com.

In summary, testing software architectural specifications will become one of the biggest cost drivers in the software industry as software development moves up to the specification level and code is automatically generated. Dynamic analysis techniques based on simulation, using tools like χSuds, offer great potential for keeping these costs under control and automating code generation, i.e., making programming at the specification level more reliable and efficient. Guided test-case creation based on the basic block dominator concept can help a designer create efficient test cases quickly. It can also help verification and validation cover most of the reachable states.

The χSuds approach combines two testing tools, χRegress and χATAC, and some other tools into one environment with a consistent GUI and a robust underlying architecture. Besides extending the tool to the architectural specification level, we have also extended it to other programming languages such as Java. χSuds-SDL continues to evolve; it promises a better graphical user


interface. Some internal and external organizations have expressed interest in using the tool suite. We hope that the tool suite will make a significant contribution to improving the reliability and efficiency of software architectural specification development, especially programming at the architectural level in the telecommunications industry.

References

Agrawal, H. 1994. Dominators, super blocks, and program coverage. Conf. Rec. 21st Annual ACM SIGPLAN-SIGACT Symp. on Principles of Programming Languages (POPL'94), Portland, Oregon, January, pp. 25–34.

Agrawal, H., Alberi, J., Li, J.J., et al. 1998. Mining system tests to aid software maintenance, IEEE Computer, July, pp. 64–73.

Atlee, J.M., and Gannon, J. 1993. State-based model checking of event-driven system requirements, IEEE Trans. Software Eng. 19(1): 24–40.

Balzer, R.M. 1969. EXDAMS: extendable debugging and monitoring system. Proc. 1969 Spring Joint Comput. Conf., Montvale, NJ, AFIPS Press, pp. 567–580.

Belina, F., Hogrefe, D., and Sarma, A. 1991. SDL with Applications from Protocol Specification, Prentice-Hall.

Brand, D., and Zafiropulo, P. 1983. On communicating finite-state machines, J. ACM 30(2): 323–342.

International Telegraph and Telephone Consultative Committee. 1989. SDL user guidelines. Blue Book: IXth Plenary Assembly, Melbourne, Nov. 14–25 1988, Geneva, International Telecommunication Union.

Kruchten, P.B. 1995. The 4+1 view model of architecture, IEEE Software, Nov., pp. 42–50.

Li, J.J., and Horgan, J.R. 1998. To maintain a reliable software specification, ISSRE'98, pp. 59–69.

Luckham, D., Kenney, J., et al. 1995. Specification and analysis of system architecture using Rapide, IEEE Trans. Software Eng. 21(4): 336–355.

J. Jenny Li is a Research Scientist in the Information and Computer Science Research Lab of Telcordia Technologies (formerly Bellcore). She received her Ph.D. from the University of Waterloo, Canada, in 1996. She was a Member of the Scientific Staff of the Data Packet Network Division of Bell-Northern Research Ltd. before attending the University of Waterloo. She has served as a committee member of several international conferences. Her current research interests include software architecture modeling, dynamic analysis via simulation, performance, testing, and formal techniques.

J. Robert Horgan is Director and Chief Scientist in Telcordia's Information and Computer Science Research Lab. He received his Ph.D. in Computer Science from Georgia Tech. He was on the faculty of Computer Science at the University of Kansas and worked at Bell Labs before joining Bellcore. His current work is in software analysis, software reliability, and software testing. He is past program chair of ISSRE'97 and currently co-chair of FDSR'00 (the Third Workshop on Formal Descriptions and Software Reliability).