Because of the distributed and collaborative nature of free/open source software (FOSS) projects, the development effort invested in a project is usually unknown, even after the software has been released. However, this information is becoming of major interest, especially, but not only, because of the growing number of companies for which FOSS has become relevant to their business strategy. In this paper we present a novel approach to estimating effort based on data from source code management repositories. We apply our model to OpenStack, a FOSS project with more than 1,000 authors in which dozens of companies cooperate. Based on data from its repositories, together with input from a survey answered by more than 100 developers, we show that the model offers a simple but sound way of obtaining software development effort estimations with bounded margins of error.
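The abstract does not spell out the model, but effort estimation from repository data is commonly based on per-author commit activity. The following is a minimal sketch of that general idea, not the paper's exact model: each author-month with at least a threshold number of commits counts as one person-month, and sparser months are scaled down proportionally (the threshold and the scaling rule are illustrative assumptions).

```python
from collections import defaultdict
from datetime import date

# Hypothetical commit log: (author, commit date) pairs, as could be
# extracted from a source code management repository with `git log`.
commits = [
    ("alice", date(2023, 1, 10)), ("alice", date(2023, 1, 25)),
    ("alice", date(2023, 2, 3)),  ("bob",   date(2023, 1, 7)),
    ("bob",   date(2023, 3, 14)), ("carol", date(2023, 2, 20)),
]

# Count commits per author per month.
activity = defaultdict(int)
for author, day in commits:
    activity[(author, day.year, day.month)] += 1

# Assumed rule: an author-month with >= THRESHOLD commits counts as one
# full person-month; sparser months are scaled down proportionally.
THRESHOLD = 2
effort_pm = sum(min(n / THRESHOLD, 1.0) for n in activity.values())
print(f"Estimated effort: {effort_pm:.1f} person-months")
```

A survey of developers, as described in the abstract, could then be used to check how well such activity-based thresholds match self-reported dedication.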
Recently, the transmission dynamics of the Human Papillomavirus (HPV) have been studied. In previous work, we designed and implemented a computational model (an agent-based simulation model) in which the contagion of HPV is described on a network of lifetime sexual partners. A single run of this computational model, on a network with 500,000 nodes, takes about an hour and a half. Besides building an adequate model, finding the model parameters that best fit it to the available prevalence data is a crucial goal. Since the number of simulations needed to calibrate the model may be very high, this goal may become unaffordable. In this paper, we present a procedure to fit the proposed HPV model to the available data, together with the design of an asynchronous version of the Particle Swarm Optimization (PSO) algorithm adapted to a distributed computing environment. In the process, the number of particles used in PSO must be set carefully, seeking a compromise between solution quality and computation time. Another feature of the procedure presented here is that we want to capture the intrinsic uncertainty in the data (which come from a survey) when calibrating the model. To do so, we also propose an algorithm to select, among the model parameter sets obtained during calibration, those that best capture the data uncertainty.
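The key point of an asynchronous PSO is that a particle moves as soon as its (expensive) evaluation returns, using whatever global best is known at that moment, instead of waiting at a barrier for the whole swarm. The sketch below illustrates this with a thread pool and a cheap stand-in objective; the objective function, swarm size, and coefficients are illustrative assumptions, not the paper's configuration.

```python
import random
from concurrent.futures import ThreadPoolExecutor, as_completed

def objective(x):
    # Stand-in for one expensive model simulation (the hour-and-a-half
    # HPV run in the paper); here a simple sphere function.
    return sum(v * v for v in x)

DIM, N_PARTICLES, EVALS = 3, 5, 60
W, C1, C2 = 0.7, 1.4, 1.4
random.seed(42)

pos = [[random.uniform(-5, 5) for _ in range(DIM)] for _ in range(N_PARTICLES)]
vel = [[0.0] * DIM for _ in range(N_PARTICLES)]
pbest = [p[:] for p in pos]
pbest_val = [objective(p) for p in pos]
g = min(range(N_PARTICLES), key=lambda i: pbest_val[i])
gbest, gbest_val = pbest[g][:], pbest_val[g]

def step(i):
    # Move particle i using the *current* global best: no barrier
    # waiting for the rest of the swarm to finish an iteration.
    for d in range(DIM):
        vel[i][d] = (W * vel[i][d]
                     + C1 * random.random() * (pbest[i][d] - pos[i][d])
                     + C2 * random.random() * (gbest[d] - pos[i][d]))
        pos[i][d] += vel[i][d]
    return i, objective(pos[i])

with ThreadPoolExecutor(max_workers=N_PARTICLES) as ex:
    pending = {ex.submit(step, i) for i in range(N_PARTICLES)}
    done = 0
    while pending and done < EVALS:
        fut = next(as_completed(pending))  # take whichever finishes first
        pending.remove(fut)
        i, val = fut.result()
        done += 1
        if val < pbest_val[i]:
            pbest[i], pbest_val[i] = pos[i][:], val
            if val < gbest_val:
                gbest, gbest_val = pos[i][:], val
        pending.add(ex.submit(step, i))  # resubmit the particle at once
    for fut in pending:
        fut.cancel()

print(f"best value found: {gbest_val:.4f}")
```

In a real distributed setting the thread pool would be replaced by remote workers, which is precisely where the asynchronous scheme avoids idle time when simulation runtimes vary.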
In advanced stages of the disease, diabetic patients have to inject insulin doses to keep their blood glucose levels within a healthy range. Deciding how much insulin to inject implicitly requires predicting the glucose level they will have after a certain time. Because glucose levels can change suddenly, estimating them is a very difficult task. Reliable estimations given in advance would facilitate therapeutic decisions to control the disease and improve the patient's health. In this work, we present a technique to estimate the glucose level of a diabetic patient that captures the measurement errors produced by continuous glucose monitoring systems (CGMSs), smart devices that measure glucose levels. To do so, we use a model of glucose dynamics and calibrate it to capture both the patient's glucose data over a 30-minute interval and the uncertainty of the glucose measurements. Then, we use the calibrated parameters to predict glucose levels over the next 15 minutes. Repeating this procedure every 15 minutes, we are able to give accurate short-term predictions.
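The rolling calibrate-then-predict loop described above can be sketched in a few lines. In this toy version a polynomial fit stands in for the paper's glucose-dynamics model, and the CGM trace is synthetic; the window (30 min), horizon (15 min), and step (15 min) follow the abstract, while everything else is an illustrative assumption.

```python
import numpy as np

# Hypothetical CGM trace: one reading every 5 minutes (mg/dL),
# a smooth underlying curve plus Gaussian sensor noise.
rng = np.random.default_rng(0)
t = np.arange(0, 120, 5)                         # minutes
true = 110 + 25 * np.sin(t / 40)                 # underlying glucose curve
readings = true + rng.normal(0, 2, size=t.size)  # noisy measurements

WINDOW, HORIZON, STEP = 30, 15, 15  # minutes, as in the abstract

preds, actuals = [], []
start = 0
while start + WINDOW + HORIZON <= t[-1]:
    # "Calibrate" on the last 30 minutes of readings: a linear fit
    # stands in here for the glucose-dynamics model.
    mask = (t >= start) & (t < start + WINDOW)
    coef = np.polyfit(t[mask], readings[mask], deg=1)
    # Predict 15 minutes past the window, then slide forward 15 minutes.
    t_pred = start + WINDOW + HORIZON
    preds.append(float(np.polyval(coef, t_pred)))
    actuals.append(float(np.interp(t_pred, t, true)))
    start += STEP

mae = float(np.mean(np.abs(np.array(preds) - np.array(actuals))))
print(f"MAE over {len(preds)} predictions: {mae:.1f} mg/dL")
```

Capturing measurement uncertainty, as the paper proposes, would amount to calibrating against an ensemble of noise-perturbed windows rather than a single fit.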
Type 1 diabetes patients have to control their blood glucose levels using insulin therapy. Numerous factors (such as carbohydrate intake, physical activity, and time of day) greatly complicate this task. In this article we propose a modeling method that allows us to predict the evolution of blood glucose levels with a time horizon of 24 hours. This may allow insulin doses to be adjusted in advance and could help improve the living conditions of diabetes patients. Our approach starts from a system of finite difference equations that characterizes the interaction between insulin and glucose (known in the field as a minimal model). This model has several parameters whose values vary widely depending on patient characteristics and time. Thus, in the first phase of our strategy, we enrich the patient's historical data by adding white Gaussian noise, which allows us to perform a probabilistic fitting with a 95% confidence interval. Then, the model parameters are adjusted to each patient's history using a genetic algorithm, dividing the day into 12 time intervals. In the final stage, we perform a whole-day forecast from an ensemble of the models fitted in the previous phase. The validity of our strategy is tested using Parkes error grid analysis. Our experimental results, based on data from real diabetic patients, show that this technique is capable of robust predictions that take into account the uncertainty associated with the interaction between insulin and glucose.
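The two key ingredients above, noise enrichment of the history and genetic-algorithm fitting of a finite difference model, can be sketched as follows. The one-parameter decay model, the noise level, and the GA operators are illustrative assumptions standing in for the paper's multi-parameter minimal model and its per-interval fitting.

```python
import random
random.seed(1)

# Toy "historical" glucose data (mg/dL) at 5-minute steps.
G_hist = [180, 172, 165, 160, 155, 152, 149, 147, 146, 145]
Gb = 140.0  # assumed basal glucose level

def simulate(p1, g0, steps):
    # Minimal-model-style finite difference: glucose decays toward
    # basal at rate p1 (a one-parameter simplification).
    g, out = g0, []
    for _ in range(steps):
        g = g - p1 * (g - Gb)
        out.append(g)
    return out

def fitness(p1, data):
    # Sum of squared errors between simulation and data.
    sim = simulate(p1, data[0], len(data) - 1)
    return sum((s - d) ** 2 for s, d in zip(sim, data[1:]))

# Phase 1: enrich the history with white Gaussian noise to obtain
# several plausible replicas (capturing measurement uncertainty).
replicas = [[g + random.gauss(0, 1.5) for g in G_hist] for _ in range(5)]

# Phase 2: a basic genetic algorithm over p1 in [0, 1].
POP, GENS = 20, 40
pop = [random.random() for _ in range(POP)]
for _ in range(GENS):
    data = random.choice(replicas)            # fit against a noisy replica
    pop.sort(key=lambda p: fitness(p, data))  # rank by squared error
    parents = pop[: POP // 2]
    children = []
    while len(children) < POP - len(parents):
        a, b = random.sample(parents, 2)
        child = (a + b) / 2                   # arithmetic crossover
        child += random.gauss(0, 0.02)        # Gaussian mutation
        children.append(min(max(child, 0.0), 1.0))
    pop = parents + children

best = min(pop, key=lambda p: fitness(p, G_hist))
print(f"fitted p1: {best:.3f}")
```

In the full strategy this fitting is repeated for each of the 12 daily intervals, and the whole-day forecast is produced by an ensemble of the resulting per-interval models.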