What makes test programs similar in microservices applications?
| Published in | The Journal of systems and software, Vol. 201, p. 111674 |
|---|---|
| Main Authors | , , , |
| Format | Journal Article |
| Language | English |
| Published | Elsevier Inc, 01.07.2023 |
| Subjects | |
| ISSN | 0164-1212; 1873-1228 |
| DOI | 10.1016/j.jss.2023.111674 |
Summary: The emergence of microservices architecture calls for novel methodologies and technological frameworks that support the design, development, and maintenance of applications structured according to this new architectural style. In this paper, we consider the issue of designing suitable strategies for the governance of testing activities within the microservices paradigm. We focus on the problem of discovering implicit relations between test programs that help to avoid re-running all the available test suites each time one of their constituents evolves. We propose a dynamic analysis technique and its supporting framework that collects information about the invocations of local and remote APIs. Information on test program execution is obtained in two ways: instrumenting the test program code or running a symbolic execution engine. The extracted information is processed by a rule-based automated reasoning engine, which infers implicit similarities among test programs. We show that our analysis technique can be used to support the reduction of test suites, and therefore has good application potential in the context of regression test optimisation. The proposed approach has been validated against two real-world microservices applications.

Highlights:

- Support the design of regression testing strategies in microservices applications.
- The dynamic analysis of test programs discloses their implicit relations.
- Test programs' information is collected by means of concrete and symbolic execution.
- Similarities across test programs emerge by means of rule-based automated reasoning.
- Criteria fostering effective decisions on which test programs shall be considered.
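To make the idea concrete, the following is a minimal sketch of the kind of inference the abstract describes: each test program is represented by the set of local/remote APIs observed during its execution, pairwise similarity is scored by set overlap, and a reduced suite is picked that still covers every observed API. All names, the Jaccard metric, and the threshold are illustrative assumptions, not the paper's actual rules or framework.

```python
# Hypothetical sketch: infer similarity between test programs from the sets
# of local/remote API invocations collected at runtime, then greedily select
# a reduced suite that still exercises every observed API. The Jaccard
# metric and greedy set cover stand in for the paper's rule-based reasoning.

def jaccard(a: set, b: set) -> float:
    """Overlap of two API-invocation sets (0.0 = disjoint, 1.0 = identical)."""
    return len(a & b) / len(a | b) if a | b else 0.0

def similar_pairs(traces: dict, threshold: float = 0.8):
    """Yield pairs of tests whose invocation sets overlap above the threshold."""
    names = sorted(traces)
    for i, t1 in enumerate(names):
        for t2 in names[i + 1:]:
            if jaccard(traces[t1], traces[t2]) >= threshold:
                yield t1, t2

def reduce_suite(traces: dict) -> list:
    """Greedy set cover: keep a subset of tests covering all observed APIs."""
    uncovered = set().union(*traces.values())
    selected = []
    while uncovered:
        best = max(traces, key=lambda t: len(traces[t] & uncovered))
        selected.append(best)
        uncovered -= traces[best]
    return selected

# Example invocation sets, as if produced by instrumentation or symbolic
# execution of three hypothetical test programs.
traces = {
    "test_checkout": {"cart.get", "payment.charge", "order.create"},
    "test_payment":  {"cart.get", "payment.charge"},
    "test_browse":   {"catalog.list", "cart.get"},
}
print(list(similar_pairs(traces, threshold=0.6)))
print(sorted(reduce_suite(traces)))
```

Here `test_payment`'s invocations are largely subsumed by `test_checkout`'s, so the two are flagged as similar and the reduced suite can omit one of them while still covering every API seen during execution.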