As software services become the dominant platform for enterprise computing and B2B/B2C applications, testing their correctness and dependability assumes ever greater importance. However, unlike the languages used to define and realize them, the languages used to test service-based systems have changed little over recent years. Today, tests for services and service-oriented architectures are still typically written using approaches such as xUnit or the Testing and Test Control Notation (TTCN-3), which were developed for traditional software. While programmatic approaches allow the full power of object-oriented programming to be used to define tests, they are intelligible only to IT experts. Model-based test representation techniques such as the Unified Modeling Language (UML) testing profile and the TTCN-3 visualization features are understandable by more stakeholders, but provide only partial descriptions of tests and do not scale well beyond simple algorithms. In this paper we present a new approach to software service testing that combines the expressive power of tabular test specification techniques, such as the Framework for Integrated Test (FIT), with programmatic techniques such as xUnit and TTCN-3. The new approach also integrates test definition with test result specification and evaluation. This allows non-IT experts to define and run tests, and integrates testing more tightly into the service-oriented development process.
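The tabular style referred to above can be illustrated with a minimal sketch: test data lives in a FIT-like table of inputs and expected results that a non-programmer could author, while a small programmatic runner (in the xUnit spirit) executes the rows. All names here (`discount`, `run_table`, the table contents) are hypothetical illustrations, not the paper's actual notation.

```python
# Hypothetical sketch: FIT-style tabular test data driven by an
# xUnit-style programmatic runner. All identifiers are illustrative.

def discount(order_total):
    """Example component under test: 5% discount above 100."""
    return round(order_total * 0.95, 2) if order_total > 100 else order_total

# Each row is (input, expected result) -- the kind of table a
# non-IT stakeholder could maintain in a spreadsheet or wiki page.
TABLE = [
    (50.0, 50.0),
    (100.0, 100.0),
    (200.0, 190.0),
]

def run_table(func, table):
    """Evaluate every row and collect (input, expected, actual, passed)."""
    results = []
    for value, expected in table:
        actual = func(value)
        results.append((value, expected, actual, actual == expected))
    return results

if __name__ == "__main__":
    for value, expected, actual, passed in run_table(discount, TABLE):
        print(f"{value:>7} -> {actual:>7} "
              f"(expected {expected}): {'PASS' if passed else 'FAIL'}")
```

The point of such a split is that the table and the runner evolve independently: domain experts edit rows, while IT experts maintain the execution and evaluation logic.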
Automating software testing can significantly reduce the time and effort required to assure the quality of software systems, and over recent years significant strides have been made in test automation techniques. However, one aspect of software testing has always resisted full automation: determining the expected results for given system states and input values, the so-called "oracle problem". Fortunately, the recent advent of a new generation of software search engines containing millions of reusable software artifacts offers an elegant solution to this dilemma. Once a search engine is able to deliver multiple results that conform to a given specification (by searching for and adapting pre-existing components), multi-version testing of software with "harvested" oracles becomes a feasible alternative to manual oracle definition. In this paper we present an approach to Search-Enhanced Testing with a focus on the discovery of discrepancies between the results returned by harvested test oracles and a Component Under Test for randomly generated test invocations. Our current research focuses on validating the hypothesis that human test engineers find more defects when analyzing such automatically discovered discrepancies than when developing test cases using traditional coverage criteria.
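The multi-version scheme described above can be sketched in a few lines: random inputs are fed both to the component under test and to independently obtained implementations of the same specification, and any disagreement is flagged for a human tester. This is a hedged illustration only; the functions `component_under_test`, `oracle_a`, and `oracle_b` are invented stand-ins for a real CUT and for components a search engine might harvest.

```python
# Hypothetical sketch of multi-version testing with harvested oracles:
# compare the component under test against independently implemented
# oracles on random inputs and record discrepancies for human review.
import random

def component_under_test(xs):
    """Buggy mean: integer division loses the fractional part."""
    return sum(xs) // len(xs)   # defect: should be true division

# Pretend these were retrieved and adapted by a code search engine.
def oracle_a(xs):
    return sum(xs) / len(xs)

def oracle_b(xs):
    total = 0.0
    for x in xs:
        total += x
    return total / len(xs)

ORACLES = [oracle_a, oracle_b]

def find_discrepancies(cut, oracles, trials=100, seed=0):
    """Generate random invocations; flag cases where the CUT's result
    differs from any harvested oracle's result."""
    rng = random.Random(seed)
    discrepancies = []
    for _ in range(trials):
        xs = [rng.randint(0, 10) for _ in range(rng.randint(1, 5))]
        cut_result = cut(xs)
        oracle_results = [oracle(xs) for oracle in oracles]
        if any(abs(cut_result - r) > 1e-9 for r in oracle_results):
            discrepancies.append((xs, cut_result, oracle_results))
    return discrepancies

if __name__ == "__main__":
    found = find_discrepancies(component_under_test, ORACLES)
    print(f"{len(found)} discrepancies in 100 random invocations")
```

In this sketch the harvested oracles agree with each other but diverge from the CUT whenever the sum is not divisible by the list length, so the discrepancy list localizes the defect without anyone having specified expected outputs by hand.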