Log files are commonly inspected by system administrators and developers to detect suspicious behaviors and diagnose failure causes. Because log files grow quickly, manual analysis is impractical, and various automatic techniques have been proposed to analyze them. Unfortunately, the accuracy and effectiveness of these techniques are often limited by the unstructured nature of logged messages and the variety of data that can be logged. This paper presents a technique to automatically analyze log files and retrieve information useful for identifying failure causes. The technique automatically identifies dependencies between events and values in logs of legal executions, generates models of legal behaviors, and compares log files collected during failing executions against the generated models to detect anomalous event sequences, which are then presented to users. Experimental results show that the technique is effective in supporting developers and testers in identifying failure causes.
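To make the general idea concrete, here is a minimal sketch, not the paper's actual algorithm: learn the event transitions observed in logs of passing runs, then flag any transition in a failing run's log that the learned model has never seen. The log format and regular expression are invented assumptions for illustration.

```python
import re

# Assumed log line format, e.g. "worker1 [DB_CONNECT] ..." (hypothetical).
EVENT_RE = re.compile(r"^\w+ \[(?P<event>[A-Z_]+)\]")

def events(lines):
    """Extract the event name from each log line that matches the format."""
    for line in lines:
        m = EVENT_RE.match(line)
        if m:
            yield m.group("event")

def learn_model(passing_logs):
    """Build the set of (event, next_event) transitions seen in legal runs."""
    legal = set()
    for log in passing_logs:
        evs = list(events(log))
        legal.update(zip(evs, evs[1:]))
    return legal

def anomalies(model, failing_log):
    """Return transitions in the failing log that the model never observed."""
    evs = list(events(failing_log))
    return [t for t in zip(evs, evs[1:]) if t not in model]
```

A real implementation would also account for dependencies between logged values, as the abstract notes; this sketch covers only event ordering.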
In safety-critical domains, system test cases are often derived from functional requirements written in natural language (NL), and traceability between requirements and their corresponding test cases is usually mandatory. Defining test cases is therefore time-consuming and error-prone, especially given the rapidly increasing complexity of embedded systems in many critical domains. Although considerable research has been devoted to the automatic generation of system test cases from NL requirements, most of the proposed approaches require significant manual intervention or additional, complex behavioural modelling, which significantly hinders their applicability in practice. In this paper, we propose Use Case Modelling for System Tests Generation (UMTG), an approach that automatically generates executable system test cases from use case specifications and a domain model, the latter including a class diagram and constraints. Our rationale and motivation are that, in many environments, including that of our industry partner in the reported case study, both use case specifications and domain modelling are common and accepted practice, whereas behavioural modelling is considered a difficult and expensive exercise if it is to be complete and precise. To extract behavioural information from use cases and enable test automation, UMTG employs Natural Language Processing (NLP), a restricted form of use case specifications, and constraint solving.
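The constraint-solving step can be illustrated with a small, hypothetical sketch using the Z3 SMT solver's Python bindings (an assumption; the abstract does not name a solver): given a branch condition extracted from a use case step, ask the solver for concrete input values that exercise that branch. The variable names and constraints are invented.

```python
from z3 import Int, Solver, sat

# Hypothetical guard derived from a use case step such as
# "if the temperature exceeds the threshold, the alarm is raised".
temperature, threshold = Int("temperature"), Int("threshold")

s = Solver()
s.add(threshold == 30)          # assumed constant from the domain model
s.add(temperature > threshold)  # branch condition the test must cover

if s.check() == sat:
    m = s.model()
    # Concrete values usable as a system test input for this scenario.
    print("test input:", m[temperature], "threshold:", m[threshold])
```

UMTG's actual pipeline is far richer (NLP over restricted use case specifications, OCL constraints on the class diagram); this shows only how a solver can turn one extracted condition into executable test data.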
This article follows up on Lionel Briand's 2012 Sounding Board article on the significant disconnect between research and industrial needs in software engineering [1]. Here, we argue that for software engineering research to increase its impact and steer our community toward a more successful future, it must change. Specifically, we see the need to foster context-driven research. By that, we mean research focused on problems defined in collaboration with industrial partners and driven by concrete needs in specific domains and development projects. By analyzing publications from the top software engineering research venues, anyone could easily conclude that only a small proportion of the papers stem from such research. Context-driven research doesn't try to frame a general problem and devise universal solutions. Rather, it makes clear working assumptions, given a precise context, and relies on trade-offs that make sense in that context to achieve practicality and scalability. This research paradigm applies to any topic in software engineering and isn't meant to highlight any particular research area. Because context-driven research doesn't produce results that generalize easily to any arbitrary software development environment, the following questions arise: Does it have value? How so? Is it (engineering) science? This is what we discuss in the rest of this article.
Despite recent advances in test generation, fully automatic software testing remains a dream: ultimately, any generated test input depends on a test oracle that determines correctness, and, except for generic properties such as "the program shall not crash", such oracles require human input in one form or another. Crowdsourcing is an increasingly popular technique for automating computations that cannot be performed by machines but only by humans: a problem is split into small chunks that are then solved by a crowd of users on the Internet. In this paper we investigate whether it is possible to exploit crowdsourcing to solve the oracle problem: we produce tasks asking users to evaluate CrowdOracles, assertions that reflect the current behavior of the program. If the crowd determines that an assertion does not match the behavior described in the code documentation, then a bug has been found. Our experiments demonstrate that CrowdOracles are a viable solution to automate the oracle problem, yet taming the crowd to get useful results is a difficult task.
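A toy sketch of the assertion-generation half of this idea (the task format and helper are invented, not the paper's implementation): capture the observed behavior of a method as a candidate assertion, then package it with the documentation so a crowd worker can accept or reject it.

```python
import math

def make_candidate_assertion(func, *args):
    """Turn an observed input/output pair into a checkable assertion string."""
    result = func(*args)
    arglist = ", ".join(repr(a) for a in args)
    return f"assert {func.__name__}({arglist}) == {result!r}"

# Hypothetical crowd task: workers judge the assertion against the docs.
task = {
    "assertion": make_candidate_assertion(math.floor, 2.7),
    "documentation": math.floor.__doc__,
    "question": "Does the asserted behaviour match the documentation?",
}
print(task["assertion"])  # assert floor(2.7) == 2
```

If workers reject an assertion that merely mirrors the code's current behavior, the mismatch between behavior and documentation points to a bug, which is exactly the signal the abstract describes.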
Testing the correct behaviour of data processing systems in the presence of faulty data is extremely expensive. The data structures processed by these systems are often complex, with many data fields and multiple constraints among them. Software engineers in charge of testing these systems have to handcraft complex data files or databases while ensuring compliance with the multiple constraints, to prevent the generation of trivially invalid inputs. In addition, assessing test results often means analysing complex output and log data. Though many techniques have been proposed to automatically test systems based on models, little exists in the literature to support the testing of systems whose complexity lies in the data consumed as input or produced as output, with complex constraints between them. In particular, such systems often need to be tested in the presence of faults in the input data, in order to assess the robustness and behaviour of the system in response to such faults. This paper presents an automated test technique that relies upon six generic mutation operators to automatically generate faulty data. The technique receives two inputs: field data and a data model, i.e. a UML class diagram annotated with stereotypes and OCL constraints. The annotated class diagram is used to tailor the behaviour of the generic mutation operators to the fault model that is assumed for the system under test and the environment in which it is deployed. Empirical results obtained with a large data acquisition system in the satellite domain show that our approach can successfully automate the generation of test suites that achieve slightly better instruction coverage than manual testing based on domain expertise. Further, the technique automates test oracles that, in our …
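As a hypothetical illustration of one such generic mutation operator (the record and constraint structures below are invented; the paper derives field ranges from OCL constraints on the annotated class diagram): take a valid record and push one field just past its declared range to produce a non-trivially faulty input.

```python
import copy

# Assumed field ranges, standing in for OCL constraints in the data model.
CONSTRAINTS = {"voltage": (0.0, 5.0)}  # field -> (min, max)

def mutate_out_of_range(record, field):
    """Return a copy of the record with `field` pushed past its maximum."""
    lo, hi = CONSTRAINTS[field]
    faulty = copy.copy(record)
    faulty[field] = hi + 1.0  # just beyond the legal range
    return faulty

valid = {"voltage": 3.3, "status": "OK"}
print(mutate_out_of_range(valid, "voltage"))
# {'voltage': 6.0, 'status': 'OK'}
```

Because the mutation starts from a constraint-compliant record and violates exactly one constraint, the result exercises the system's fault handling rather than being rejected as trivially malformed input.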