Over more than two decades, numerous variability modeling techniques have been introduced in academia and industry. However, little is known about the actual use of these techniques. While dozens of experience reports on software product line engineering exist, very few focus on variability modeling. This lack of empirical data threatens the validity of existing techniques and hinders their improvement. As part of our effort to improve the empirical understanding of variability modeling, we present the results of a survey questionnaire distributed to industrial practitioners. These results provide insights into application scenarios and perceived benefits of variability modeling, the notations and tools used, the scale of industrial models, and experienced challenges and mitigation strategies.
Abstract. Ambient Assisted Living is currently one of the most important research and development areas, in which accessibility, usability, and learning play a major role and future interfaces are an important concern for applied engineering. The general goal of ambient assisted living solutions is to apply ambient intelligence technology to enable people with specific demands, e.g. handicapped or elderly people, to live in their preferred environment longer. Due to the high potential for emergencies, sound emergency assistance is required. Assisting elderly people with comprehensive ambient assisted living solutions places high demands on overall system quality and consequently on software and system engineering; user acceptance and support by various user interfaces are an absolute necessity. In this article, we present an Assisted Living Laboratory that is used to train elderly people to handle modern interfaces for Assisted Living and to evaluate the usability and suitability of these interfaces in specific situations, e.g., emergency cases.
Abstract. Many companies develop software product lines (collections of similar products) by cloning and adapting artifacts of existing product variants. Transforming such cloned product variants into a "single-copy" software product line representation is considered an important software re-engineering activity, as reflected in the numerous tools and methodologies available. However, the development practices of companies that use cloning to implement product lines have not been systematically studied. This lack of empirical knowledge threatens the validity and applicability of approaches supporting the transformation, impedes the adoption of advanced solutions for systematic software reuse, and hinders attempts to improve the solutions themselves. We address this gap with an empirical study conducted to investigate the cloning culture in six industrial software product lines realized via code cloning. Our study investigates the processes, and the perceived advantages and disadvantages, of the approach. We observe that cloning, while widely discouraged in the literature, is still perceived as a favorable and natural reuse approach by the majority of practitioners in the studied companies, mainly due to benefits such as simplicity, availability, and developer independence. Based on our observations, we outline issues preventing the adoption of systematic software reuse approaches and identify future research directions.
Estimating the time of delivery is of high clinical importance because pre- and postterm deviations are associated with complications for the mother and her offspring. However, current estimations are inaccurate. As pregnancy progresses toward labor, major transitions occur in fetomaternal immune, metabolic, and endocrine systems that culminate in birth. The comprehensive characterization of maternal biology that precedes labor is key to understanding these physiological transitions and identifying predictive biomarkers of delivery. Here, a longitudinal study was conducted in 63 women who went into labor spontaneously. More than 7000 plasma analytes and peripheral immune cell responses were analyzed using untargeted mass spectrometry, aptamer-based proteomic technology, and single-cell mass cytometry in serial blood samples collected during the last 100 days of pregnancy. The high-dimensional dataset was integrated into a multiomic model that predicted the time to spontaneous labor [R = 0.85, 95% confidence interval (CI) 0.79 to 0.89, P = 1.2 × 10⁻⁴⁰, N = 53, training set; R = 0.81, 95% CI 0.61 to 0.91, P = 3.9 × 10⁻⁷, N = 10, independent test set]. Coordinated alterations in the maternal metabolome, proteome, and immunome marked a molecular shift from pregnancy maintenance to prelabor biology 2 to 4 weeks before delivery. A surge in steroid hormone metabolites and interleukin-1 receptor type 4 that preceded labor coincided with a switch from immune activation to regulation of inflammatory responses. Our study lays the groundwork for developing blood-based methods for predicting the day of labor, anchored in mechanisms shared in preterm and term pregnancies.
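The R values with 95% confidence intervals reported above are a standard correlation-plus-bootstrap summary of predicted versus observed time to labor. A minimal stdlib sketch of Pearson correlation with a percentile bootstrap CI (our own illustration of the general statistic, not the authors' analysis pipeline; function names are ours):

```python
import math
import random
import statistics

def pearson_r(a, b):
    """Pearson correlation coefficient between two equal-length sequences."""
    ma, mb = statistics.fmean(a), statistics.fmean(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b))
    return num / den

def bootstrap_ci(a, b, n_boot=2000, seed=0):
    """Percentile bootstrap 95% CI for Pearson r: resample index pairs
    with replacement, recompute r, and take the 2.5th/97.5th percentiles."""
    rng = random.Random(seed)
    n = len(a)
    rs = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        xs, ys = [a[i] for i in idx], [b[i] for i in idx]
        # skip degenerate resamples with zero variance in either variable
        if len(set(xs)) > 1 and len(set(ys)) > 1:
            rs.append(pearson_r(xs, ys))
    rs.sort()
    return rs[int(0.025 * len(rs))], rs[int(0.975 * len(rs))]
```

For example, `bootstrap_ci(observed_days, predicted_days)` on paired observations returns an interval analogous to the "95% CI 0.79 to 0.89" figure quoted in the abstract.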
The development of ICT infrastructures has facilitated the emergence of new paradigms for looking at society and the environment over the last few years. Participatory environmental sensing, i.e. directly involving citizens in environmental monitoring, is one example, which is hoped to encourage learning and enhance awareness of environmental issues. In this paper, an analysis of the behaviour of individuals involved in noise sensing is presented. Citizens have been involved in noise-measuring activities through the WideNoise smartphone application, which has been designed to record both objective (noise samples) and subjective (opinions, feelings) data. The application has been freely available to anyone and has been widely used worldwide; in addition, several test cases have been organised in European countries. Based on the information submitted by users, an analysis of emerging awareness and learning is performed. The data show that changes in the way the environment is perceived do appear after repeated usage of the application. Specifically, users learn to recognise the different noise levels they are exposed to. Additionally, the subjective data collected indicate increased user involvement over time and a categorisation effect between pleasant and less pleasant environments.
Almost every sufficiently complex software system today is configurable. Conditional compilation is a simple variability-implementation mechanism that is widely used in open-source projects and industry. In particular, the C preprocessor (cpp) is very popular in practice, but it is also gaining (again) interest in academia. Although there have been several attempts to understand and improve cpp, there is a lack of understanding of how it is used in open-source and industrial systems and whether different usage patterns have emerged. The background is that much research on configurable systems and product lines concentrates on open-source systems, simply because they are available for study in the first place. This leads to the potentially problematic situation that it is unclear whether the results obtained from these studies are transferable to industrial systems. We aim to narrow this gap by comparing the use of cpp in open-source projects and industry, especially from the embedded-systems domain, based on a substantial set of subject systems and well-known variability metrics, including size, scattering, and tangling metrics.* A key result of our empirical study is that, regarding almost all aspects we studied, the analyzed open-source systems and the considered embedded systems from industry are similar regarding most metrics, including systems that were developed in industry and made open source at some point. So our study indicates that, regarding cpp as a variability-implementation mechanism, insights, methods, and tools developed based on studies of open-source systems are transferable to industrial systems, at least with respect to the metrics we considered.

* This author published previous work as Janet Feigenspan.
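The scattering and tangling metrics named above can be approximated directly from preprocessor directives: scattering counts how many conditional blocks reference a configuration macro, while tangling counts how many distinct macros appear in one #if expression. A rough, illustrative sketch (our own simplification based on simple line matching, not the instrumented analysis used in such studies; the function name is ours):

```python
import re
from collections import defaultdict

def variability_metrics(source: str):
    """Approximate per-macro scattering degrees and per-directive
    tangling degrees from cpp conditional-compilation directives.
    Assumes macros follow the uppercase CONFIG-style convention."""
    scattering = defaultdict(int)  # macro -> number of referencing directives
    tangling = []                  # per-directive count of distinct macros
    for line in source.splitlines():
        m = re.match(r'\s*#\s*(?:if|ifdef|ifndef|elif)\b(.*)', line)
        if m:
            # uppercase identifiers are treated as configuration macros
            macros = set(re.findall(r'\b[A-Z_][A-Z0-9_]*\b', m.group(1)))
            for mac in macros:
                scattering[mac] += 1
            tangling.append(len(macros))
    return dict(scattering), tangling
```

On a snippet with `#ifdef CONFIG_A` followed by `#if defined(CONFIG_A) && defined(CONFIG_B)`, this yields a scattering degree of 2 for CONFIG_A and a tangling degree of 2 for the second directive.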
The dense network of interconnected cellular signalling responses that are quantifiable in peripheral immune cells provides a wealth of actionable immunological insights. Although high-throughput single-cell profiling techniques, including polychromatic flow and mass cytometry, have matured to a point that enables detailed immune profiling of patients in numerous clinical settings, the limited cohort size and high dimensionality of the data increase the possibility of false-positive discoveries and model overfitting. We introduce a generalizable machine learning platform, the immunological Elastic-Net (iEN), which incorporates immunological knowledge directly into the predictive models. Importantly, the algorithm maintains the exploratory nature of the high-dimensional dataset, allowing for the inclusion of immune features with strong predictive capabilities even if not consistent with prior knowledge. In three independent studies, our method demonstrates improved predictions for clinically relevant outcomes from mass cytometry data generated from whole blood, as well as on a large simulated dataset. The iEN is available under an open-source licence.
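The core idea described above, penalizing features that disagree with prior knowledge more heavily while leaving all features in the model, can be illustrated with a prior-weighted elastic net fitted by proximal gradient descent. This is our own simplified sketch of that general technique under assumed conventions (prior weights in [0, 1], lower meaning a weaker penalty), not the published iEN implementation:

```python
import math

def prior_weighted_elastic_net(X, y, priors, lam=0.1, alpha=0.5, lr=0.01, epochs=2000):
    """Minimize 0.5/n * ||y - Xw||^2
       + lam * sum_j priors[j] * (alpha*|w_j| + 0.5*(1-alpha)*w_j^2)
    via proximal gradient descent. priors[j] < 1 weakens the penalty on
    feature j, so prior-consistent features are shrunk less."""
    n, p = len(X), len(X[0])
    w = [0.0] * p
    for _ in range(epochs):
        # residuals of the current linear model
        resid = [sum(X[i][j] * w[j] for j in range(p)) - y[i] for i in range(n)]
        for j in range(p):
            # gradient of the smooth part: squared loss + prior-scaled L2 term
            g = sum(X[i][j] * resid[i] for i in range(n)) / n
            g += lam * priors[j] * (1 - alpha) * w[j]
            w[j] -= lr * g
            # proximal step: soft-threshold with the prior-scaled L1 penalty
            t = lr * lam * priors[j] * alpha
            w[j] = math.copysign(max(abs(w[j]) - t, 0.0), w[j])
    return w
```

With a low prior penalty on an informative feature and a full penalty on a noise feature, the fit recovers the informative coefficient while shrinking the noise coefficient toward zero, which is the behaviour the abstract attributes to knowledge-weighted regularization.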