Abstract. Biomedical research requires deep domain expertise to perform analyses of complex data sets, assisted by the mathematical expertise of data scientists who design and develop sophisticated methods and tools. Such methods and tools not only require preprocessing of the data, but above all a meaningful input selection. Usually, data scientists do not have sufficient background knowledge about the origin of the data and the biomedical problems to be solved; consequently, a doctor-in-the-loop can be of great help here. In this paper we examine the viability of integrating an analysis-guided visualization component into an ontology-guided data infrastructure, exemplified by principal component analysis. We evaluate this approach by examining the potential for intelligent support of medical experts in the case of cerebral aneurysm research.
In his PhD thesis [1], Buchberger introduced the notion of Gröbner bases and gave the first algorithm for computing them. Since then, extensive research has been done to reduce the complexity of the computation. Nevertheless, even for small examples the computation sometimes does not terminate in reasonable time.

There are basically two approaches for computing a Gröbner basis. The first is the one pursued by the Buchberger algorithm: we start from the initial set F and execute certain reduction steps, consisting of multiplications of polynomials by terms (called shifts) and subtractions of polynomials. By Buchberger's theorem, which says that the computation is finished once all s-polynomials reduce to zero, we know that after finitely many iterations of this procedure we obtain a Gröbner basis of the ideal generated by F. The second approach is to start from F, execute certain shifts of the initial polynomials in F, arrange them as rows in a matrix, triangularize this matrix, and extract a Gröbner basis from the resulting matrix.

In project DK1 of the Doctoral Program, which was proposed by Buchberger, we pursue the second approach and seek to improve the theory in order to speed up the Gröbner bases computation. This approach has been studied a couple of times in the past, but never thoroughly. The immediate question is: does there exist a finite set of shifts such that a triangularization of the matrix built by these shifts yields a Gröbner basis, and, if so, how can we construct these shifts? We give first results in answering this question. In the following, let K be a field.

In the univariate case, Gröbner bases computation specializes to gcd computation. In [3] (see also [4] for a good overview of this topic), Habicht establishes a connection between the computation of polynomial remainder sequences and linear algebra.
More specifically, the problem of finding a gcd of two polynomials f, g ∈ K[x] with degrees m and n, respectively, where m ≥ n, can be solved by triangularizing the matrix whose rows are the coefficient vectors of the shifted polynomials x^(n-1)f, ..., xf, f, x^(m-1)g, ..., xg, g, i.e., the Sylvester matrix of f and g: the last nonzero row of the triangularized matrix contains the coefficients of a gcd of f and g.
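The univariate case can be sketched concretely. The following minimal Python illustration (the function names and the use of exact rational arithmetic are our own choices, not taken from [3]) builds the matrix of shifts of f and g, triangularizes it by Gaussian elimination, and reads off a gcd from the last nonzero row:

```python
from fractions import Fraction

def sylvester_matrix(f, g):
    """Build the Sylvester matrix of f and g.

    f, g are coefficient lists from highest to lowest degree,
    with deg f = m >= deg g = n.  Rows are the coefficient vectors
    of the shifts x^(n-1)f, ..., xf, f, x^(m-1)g, ..., xg, g.
    """
    m, n = len(f) - 1, len(g) - 1
    rows = []
    for i in range(n):  # shifts of f
        rows.append([Fraction(0)] * i + [Fraction(c) for c in f]
                    + [Fraction(0)] * (n - 1 - i))
    for i in range(m):  # shifts of g
        rows.append([Fraction(0)] * i + [Fraction(c) for c in g]
                    + [Fraction(0)] * (m - 1 - i))
    return rows

def gcd_by_triangularization(f, g):
    """Triangularize the Sylvester matrix; the last nonzero row is a
    scalar multiple of gcd(f, g), read off as a coefficient vector."""
    M = sylvester_matrix(f, g)
    rows, cols = len(M), len(M[0])
    r = 0
    for c in range(cols):
        piv = next((i for i in range(r, rows) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(r + 1, rows):
            if M[i][c] != 0:
                factor = M[i][c] / M[r][c]
                M[i] = [a - factor * b for a, b in zip(M[i], M[r])]
        r += 1
    # last nonzero row, trimmed of leading zeros and normalized to monic
    for row in reversed(M):
        if any(x != 0 for x in row):
            first = next(i for i, x in enumerate(row) if x != 0)
            lead = row[first]
            return [x / lead for x in row[first:]]
    return [Fraction(0)]

# gcd(x^3 - 1, x^2 - 1) = x - 1
print(gcd_by_triangularization([1, 0, 0, -1], [1, 0, -1]))
```

Here exact rational arithmetic avoids the coefficient rounding that floating-point elimination would introduce; the same elimination scheme is what the multivariate matrix approach generalizes.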
Processing and exploring large quantities of electronic data is often a particularly interesting yet challenging task. Both the lack of statistical and mathematical skills and the missing know-how for handling masses of (health) data constitute high barriers to sound data exploration, especially when performed by domain experts. This paper presents guided visual pattern discovery, taking the well-established data mining method Principal Component Analysis as an example. Without guidance, the user has to be aware of the reliability of computed results at every point during the analysis (GIGO principle). In the course of integrating principal component analysis into an ontology-guided research infrastructure, we include a guidance system that supports the user through the separate analysis steps, and we introduce a quality measure, which is essential for sound research results.
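As a rough illustration of the kind of computation being guided, here is a minimal PCA sketch in Python/NumPy. The explained-variance ratio reported below is one plausible quality indicator; it is not necessarily the quality measure introduced in the paper, and the function name is our own:

```python
import numpy as np

def pca(data, n_components=2):
    """Minimal PCA via SVD: center the data, decompose, and report the
    explained-variance ratio of each kept component as a quality check."""
    centered = data - data.mean(axis=0)
    # full_matrices=False keeps the decomposition economical
    u, s, vt = np.linalg.svd(centered, full_matrices=False)
    scores = centered @ vt[:n_components].T           # projected samples
    var = s ** 2 / (len(data) - 1)                    # component variances
    explained_ratio = var[:n_components] / var.sum()  # quality indicator
    return scores, explained_ratio

rng = np.random.default_rng(0)
# 100 samples of 5 correlated features: rank-1 signal plus small noise
x = (rng.normal(size=(100, 1)) @ rng.normal(size=(1, 5))
     + 0.1 * rng.normal(size=(100, 5)))
scores, ratio = pca(x)
print(scores.shape, ratio.round(3))
```

A guidance system in the spirit described above could, for instance, warn the user when the explained-variance ratio of the retained components falls below a threshold, i.e., when the projection discards too much of the data's structure.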
This paper presents AUGURY, an application for the analysis of monitoring data from computers, servers, and cloud infrastructures. The analysis is based on the extraction of patterns and trends from historical data, using elements of time-series analysis. The purpose of AUGURY is to aid a server administrator by forecasting the behaviour and resource usage of specific applications and by presenting a status report in a concise manner. AUGURY provides tools for identifying network traffic congestion and peak usage times, and for making memory usage projections. The application's data processing specialises in two tasks: the parametrisation of the memory usage of individual applications and the extraction of the seasonal component from network traffic data. AUGURY uses a different underlying assumption for each of these two tasks. With respect to memory usage, a limited number of single-valued parameters is assumed to be sufficient to parametrise any application hosted on the server. Regarding the network traffic data, recurring patterns, such as hourly or daily cycles, are assumed to exist, induced by work-time schedules and automated administrative jobs. In this paper, the implementation of each of the two tasks is presented, tested using locally generated data, and applied to data from weather forecasting applications hosted on a web server. These data are used to demonstrate the insight that AUGURY can add to the monitoring of server and cloud infrastructures.
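As an illustration of the second task, the following Python sketch extracts a daily seasonal component from an hourly traffic series by averaging over whole days. AUGURY's actual implementation is not detailed here, so the function name, the averaging approach, and the synthetic data are assumptions of ours:

```python
import numpy as np

def extract_daily_seasonality(traffic, samples_per_day=24):
    """Estimate the daily seasonal component of an hourly traffic series
    by averaging corresponding hours over whole days; return the seasonal
    profile, the residual after removing it, and the peak hour."""
    n_days = len(traffic) // samples_per_day
    days = np.asarray(traffic[:n_days * samples_per_day]).reshape(
        n_days, samples_per_day)
    seasonal = days.mean(axis=0)        # average profile over one day
    residual = days - seasonal          # what remains after removing it
    peak_hour = int(seasonal.argmax())  # e.g. a congestion indicator
    return seasonal, residual, peak_hour

# synthetic traffic: a work-time bump around hour 14 plus noise, 7 days
rng = np.random.default_rng(1)
hours = np.arange(7 * 24)
traffic = (10 + 5 * np.exp(-((hours % 24) - 14) ** 2 / 8)
           + rng.normal(0, 0.1, hours.size))
seasonal, residual, peak_hour = extract_daily_seasonality(traffic)
print(peak_hour)  # prints 14: the injected peak usage hour is recovered
```

Averaging over days suppresses the noise by a factor of roughly the square root of the number of days, so even a modest daily cycle becomes visible; the residual can then be inspected for the long-term trend, analogous to the memory-usage projections described above.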