Abstract: With the growing size and complexity of software applications, research in the area of architecture-based software reliability analysis has gained prominence. The purpose of this paper is to provide an overview of the existing research in this area, critically examine its limitations, and suggest ways to address them.
Prevalent approaches to characterizing the behavior of monolithic applications are ill-suited to modern software systems, which are heterogeneous and built from a combination of components: some picked off the shelf, some developed in-house, and some developed contractually. Techniques to characterize the behavior of such component-based software systems based on their architecture are therefore essential. Earlier efforts in architecture-based analysis have focused on composite models, which are cumbersome due to their inherent largeness and stiffness. In this paper we develop an accurate hierarchical model to predict the performance and reliability of component-based software systems based on their architecture. The model accounts for the variance of the number of visits to each module, and thus provides predictions closer to those of a composite model. The approach enables the identification of performance and reliability bottlenecks. We also derive expressions to analyze the sensitivity of the performance and reliability predictions to changes in the parameters of individual modules. In addition, we demonstrate how the hierarchical model can be used to assess the impact of workload changes on the performance and reliability of the application. We illustrate the prediction and sensitivity analysis techniques with examples.
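The first-order idea behind such a hierarchical estimate can be sketched in code. The snippet below is an illustrative reconstruction, not the paper's implementation: it derives the expected number of visits to each module from a DTMC describing the application's architecture, then combines those visit counts with per-module reliabilities. The 3-module pipeline, its transition probabilities, and the reliability values are hypothetical examples; the paper additionally corrects for the variance of the visit counts, which this sketch omits.

```python
def expected_visits(Q, start=0, tol=1e-12, max_steps=100000):
    """Expected number of visits to each transient module of a DTMC.

    Q[i][j] is the transition probability from module i to module j;
    rows may sum to less than 1 (the remainder is the exit probability).
    Accumulates the power series V = e_start * (I + Q + Q^2 + ...).
    """
    n = len(Q)
    cur = [1.0 if i == start else 0.0 for i in range(n)]
    visits = [0.0] * n
    for _ in range(max_steps):
        visits = [v + c for v, c in zip(visits, cur)]
        cur = [sum(cur[i] * Q[i][j] for i in range(n)) for j in range(n)]
        if sum(cur) < tol:  # chain has almost surely exited
            break
    return visits

def hierarchical_reliability(Q, module_rel, start=0):
    """First-order hierarchical estimate: R ~= prod_j R_j ** E[V_j]."""
    rel = 1.0
    for r, v in zip(module_rel, expected_visits(Q, start)):
        rel *= r ** v
    return rel

# Hypothetical 3-module pipeline: module 1 -> 2 -> 3 -> exit.
Q = [[0.0, 1.0, 0.0],
     [0.0, 0.0, 1.0],
     [0.0, 0.0, 0.0]]
print(hierarchical_reliability(Q, [0.99, 0.98, 0.97]))  # ~0.9411
```

Because each module is visited exactly once in this acyclic example, the estimate reduces to the plain product 0.99 x 0.98 x 0.97; the visit-count machinery matters when the architecture contains loops.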
Summary and Conclusions

Traditional approaches to software reliability modeling are black-box based: the software system is considered as a whole, and only its interactions with the outside world are modeled, without looking into its internal structure. Black-box approaches are adequate for characterizing the reliability of monolithic, custom, built-to-specification software applications. However, with the widespread use of object-oriented systems design and development, component-based software development is on the rise. Software systems are developed in a heterogeneous fashion (multiple teams in different environments), and it may therefore be inappropriate to model the overall failure process of such systems with one of the several software reliability growth models (the black-box approach). Predicting the reliability of a software system from its architecture and the failure behavior of its components is thus essential. Most research efforts in this direction have focused on developing analytical or state-based models. However, the development of state-based models has been largely ad hoc, with little effort devoted to establishing a unifying framework that compares and contrasts these models. Also, to the best of our knowledge, no attempt has been made to offer insight into how these models might be applied to real software applications. This paper proposes a unifying framework for state-based models for architecture-based software reliability prediction. We outline the information required to specify state-based models for predicting application reliability, and we propose a systematic classification scheme for state-based approaches to reliability prediction.
The scheme classifies the state-based models along three dimensions: the model used to represent the architecture of the software, the model used to represent the failure behavior of the components of the application, and the method of analysis. We place the existing models in the literature into categories according to these three dimensions, and then present an exhaustive analysis of those models in which the architecture of the application is represented either as a discrete time Markov chain (DTMC) or a continuous time Markov chain (CTMC). We illustrate the DTMC- and CTMC-based models with examples. We also provide a detailed discussion of the input parameters required by each model, and how these parameters may be estimated from different software artifacts. Depending on the software artifacts available during a given phase of the software life cycle, and the parameters that can be estimated from them, we provide guidance on which model may be appropriate for predicting the reliability of an application during each phase of its life cycle.
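To make the DTMC-based class of models concrete, the sketch below follows the general shape of Cheung-style composite analysis: each architectural transition probability P[i][j] is scaled by the reliability R[i] of the departing module, folding module failures into the chain, and the probability of reaching the final module is then accumulated. The 2-module application, its transition probabilities, and the reliability values are hypothetical illustrations, not data from any surveyed system.

```python
def composite_reliability(P, R, start=0, final=None, tol=1e-12, max_steps=100000):
    """Cheung-style DTMC reliability sketch.

    P[i][j]: architecture transition probability from module i to module j.
    R[i]:    reliability of module i.
    Scaling each row of P by R[i] yields a sub-stochastic matrix Qh; the
    application reliability is the expected visit count of the final
    module from the start module (sum over k of [Qh^k]), times R[final].
    """
    n = len(P)
    if final is None:
        final = n - 1
    Qh = [[R[i] * P[i][j] for j in range(n)] for i in range(n)]
    cur = [1.0 if i == start else 0.0 for i in range(n)]
    reach = 0.0
    for _ in range(max_steps):
        reach += cur[final]
        cur = [sum(cur[i] * Qh[i][j] for i in range(n)) for j in range(n)]
        if sum(cur) < tol:  # remaining probability mass is negligible
            break
    return reach * R[final]

# Hypothetical 2-module application: module 1 always hands off to module 2.
P = [[0.0, 1.0],
     [0.0, 0.0]]
print(composite_reliability(P, [0.90, 0.95]))  # 0.90 * 0.95
```

A closed-form solver would instead invert (I - Qh); the series accumulation above is used only to keep the sketch dependency-free.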
Most modern Web robots that crawl the Internet to support value-added services and technologies possess sophisticated data collection and analysis capabilities. Some of these robots, however, may be ill-behaved or malicious, and hence, may impose a significant strain on a Web server. It is thus necessary to detect Web robots in order to block undesirable ones from accessing the server. Such detection is also essential to ensure that the robot traffic is considered appropriately in the performance and capacity planning of Web servers. Despite a variety of Web robot detection techniques, there is no consensus regarding a single technique, or even a specific "type" of technique, that performs well in practice. Therefore, to aid in the development of a practically applicable robot detection technique, this survey presents a critical analysis and comparison of the prevalent detection approaches. We propose a framework to classify the existing detection techniques into four categories based on their underlying detection philosophy. We compare the different classes to gain insights into those characteristics that make up an effective robot detection scheme. Finally, we discuss why the contemporary techniques fail to offer a general solution to the robot detection problem and propose a set of key ingredients necessary for strong Web robot detection.
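As a concrete illustration of the simplest kind of detection the survey discusses, the heuristic below flags requests by syntactic log features alone. The log-entry field names and user-agent signature strings are hypothetical examples; real detectors use far richer feature sets, and camouflaged robots evade exactly this sort of check, which is one reason no single technique suffices in practice.

```python
# Hypothetical user-agent substrings that self-declared crawlers often contain.
ROBOT_UA_HINTS = ("bot", "crawler", "spider", "scraper")

def looks_like_robot(log_entry):
    """Flag a single access-log entry that exhibits common robot signatures."""
    ua = log_entry.get("user_agent", "").lower()
    if any(hint in ua for hint in ROBOT_UA_HINTS):
        return True   # self-declared crawler in the user-agent string
    if log_entry.get("path") == "/robots.txt":
        return True   # well-behaved robots fetch robots.txt before crawling
    if not ua and not log_entry.get("referrer"):
        return True   # empty user agent and referrer: suspicious client
    return False

print(looks_like_robot({"user_agent": "Examplebot/1.0", "path": "/"}))        # True
print(looks_like_robot({"user_agent": "Mozilla/5.0", "path": "/index.html",
                        "referrer": "https://example.com/"}))                 # False
```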