Increased customer needs and intensified global competition require intelligent and flexible automation. The interaction technology of mobile robotics addresses this need and therefore holds great potential within industry. This paper presents the concepts, ideas, and working principles of the mobile robot "Little Helper", an ongoing research project at Aalborg University, Denmark, concerning the development of an autonomous and flexible manufacturing assistant. To demonstrate the "Little Helper" concept, a full-scale prototype has been built and experiments have been carried out. The experience and knowledge gained from these show promising results regarding the industrial integration, exploitation, and maturation of mobile robotics.
This paper investigates the application potential of the technology-push manufacturing technology (TPMT) autonomous industrial mobile manipulation (AIMM), in order to link conceptual ideas (academia) to actual manufacturing requirements (industry). The approach applies the proposed TPMT methodology in a comprehensive industrial case study. More than 566 manufacturing tasks have been analyzed across three main application areas (logistics, assistance, and service) to determine their suitability for AIMM technology. The conducted TPMT analysis shows that AIMM has great potential within the manufacturing industries: more than two thirds of the analyzed manufacturing tasks are solvable with AIMM within the next few years. At its current stage, the AIMM technology finds its most suitable applications within logistics (e.g., transportation and part feeding), moving toward assistance (e.g., (pre)assembly and machine tending), and, in the future, more service-minded tasks (e.g., maintenance and cleaning). Based on the identified real-world applications, it is possible to raise the AIMM technology to the next levels of industrial maturation, integration, and commercialization.
Abstract-In a wide range of application areas (e.g., data mining, approximate query evaluation, histogram construction), database sampling has proved to be a powerful technique. It is generally used when the computational cost of processing large amounts of information is extremely high and a faster response with a lower level of accuracy is preferred. Previous sampling techniques achieve this balance; however, the cost of the database sampling process itself should also be evaluated. We argue that the performance of current relational database sampling techniques that maintain the data integrity of the sample database is low, and that a faster strategy needs to be devised. In this paper we propose a very fast sampling method that keeps the referential integrity of the sample database intact. The method targets the production environment of a system under development, which generally consists of large amounts of data that are computationally costly to analyze. We evaluate our method against previous database sampling approaches and show that it produces a sample database at least 300 times faster, with a maximum trade-off of 0.5% in terms of sample size error.
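The abstract does not reproduce the authors' algorithm, but the invariant it preserves, referential integrity of the sample, can be sketched in a generic way: sample the parent table first, then keep only the child rows whose foreign keys point to sampled parents, so no dangling references remain. The function name, the dict-based table representation, and the customers/orders schema below are illustrative assumptions, not the paper's implementation.

```python
import random

def sample_with_integrity(parents, children, fk, rate, seed=0):
    """Sample the parent table at the given rate, then retain only
    child rows whose foreign key `fk` references a sampled parent,
    so the sample database contains no dangling references."""
    rng = random.Random(seed)
    sampled_parents = [p for p in parents if rng.random() < rate]
    kept_ids = {p["id"] for p in sampled_parents}
    sampled_children = [c for c in children if c[fk] in kept_ids]
    return sampled_parents, sampled_children

# Hypothetical two-table database: orders reference customers.
customers = [{"id": i} for i in range(100)]
orders = [{"id": i, "customer_id": i % 100} for i in range(300)]
cust_sample, order_sample = sample_with_integrity(
    customers, orders, "customer_id", rate=0.5)
# Every sampled order still references a sampled customer.
assert all(o["customer_id"] in {c["id"] for c in cust_sample}
           for o in order_sample)
```

Sampling parents before children is what makes the integrity check trivially hold; sampling the tables independently would instead require a repair pass to remove orphaned child rows.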
Generating synthetic data is useful in multiple application areas (e.g., database testing, software testing). Nevertheless, existing synthetic data generators generally lack the mechanisms needed to produce realistic data, unless a complex set of inputs is given by the user, such as the characteristics of the desired data. An automated and efficient technique is needed for generating realistic data. In this paper, we propose ReX, a novel extrapolation system targeting relational databases that aims to produce a representative extrapolated database given an original one and a natural scaling rate. Furthermore, we evaluate our system against an existing realistic scaling method, UpSizeR, by measuring the representativeness of the extrapolated database with respect to the original one, the accuracy of approximate query answering, the database size, and performance. Results show that our solution significantly outperforms the compared method along all considered dimensions.
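The details of ReX are not given in the abstract, but the basic notion of extrapolating a table by a natural scaling rate can be illustrated: replicate the rows, assigning fresh primary-key values per copy by a fixed offset. This is a deliberately minimal sketch; a realistic system such as ReX or UpSizeR must additionally remap foreign keys and preserve join and value distributions, which this toy version omits. The function and column names are hypothetical.

```python
def scale_table(rows, key, rate):
    """Replicate a table `rate` times, shifting the key column `key`
    by a fixed offset per copy so that all keys stay unique.
    (Toy sketch of database extrapolation; not the ReX algorithm.)"""
    offset = max(r[key] for r in rows) + 1
    return [dict(r, **{key: r[key] + copy * offset})
            for copy in range(rate)
            for r in rows]

# Hypothetical table scaled by a rate of 3: row count triples,
# and the generated primary keys remain unique.
rows = [{"id": i, "value": i * 10} for i in range(5)]
scaled = scale_table(rows, "id", 3)
assert len(scaled) == 15
assert len({r["id"] for r in scaled}) == 15
```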
High-frequency mechanical impact (HFMI) treatment is a well-documented post-weld treatment for improving the fatigue life of welds. Because of the curved and inconsistent nature of the weld toe, treatment must be performed by a skilled operator to ensure acceptable quality. However, the process is characterised by noise and vibrations; hence, manual treatment should be avoided for extended periods of time. This work proposes an automated system that applies robotised 3D scanning to perform post-weld treatment and quality inspection of linear welds. A 3D scan of the weld is used to locally determine the gradient and curvature across the weld surface and thereby locate the weld toe. Based on the weld toe position, an adaptive robotic treatment trajectory is generated that accurately follows the curvature of the weld toe and adapts the tool orientation to the weld profile. The 3D scan is repeated after the treatment, and the surface gradient and curvature are further used to extract quantitative measures of the treatment, such as weld toe radius, indentation depth, and groove deviation and width. The adaptive robotic treatment is compared experimentally to manual and linear robotic treatment by treating 600 mm of weld toe with each treatment type and evaluating the quantitative measures using the developed system. The results show that the developed system reduced the overall treatment variance by 26.6% and 31.9% compared to manual and linear robotic treatment, respectively. Additionally, a mean weld toe deviation of 0.09 mm was achieved, thus improving process stability while minimising human involvement.
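The toe-localisation step described above rests on standard differential geometry: along a cross-sectional height profile z(x), the curvature κ = z'' / (1 + z'²)^{3/2} peaks where the flat base plate meets the sloped weld bead. The finite-difference sketch below illustrates that principle on a synthetic profile; it is an assumption-laden toy, not the paper's scanning pipeline, and the profile shape is invented for the example.

```python
import numpy as np

def locate_toe(x, z):
    """Locate the weld toe as the point of maximum curvature along a
    cross-sectional height profile z(x), using finite differences
    (np.gradient) to approximate z' and z''."""
    dz = np.gradient(z, x)
    d2z = np.gradient(dz, x)
    kappa = np.abs(d2z) / (1.0 + dz**2) ** 1.5
    return x[np.argmax(kappa)]

# Synthetic profile: flat base plate meeting a sloped weld bead at x = 0.
x = np.linspace(-5.0, 5.0, 201)
z = np.where(x < 0.0, 0.0, 0.5 * x)
toe_x = locate_toe(x, z)  # the curvature maximum sits at the kink
```

In the real system this operation would run on noisy scan data, so smoothing the profile before differentiating would be essential; the clean synthetic kink makes the curvature peak unambiguous here.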
Abstract. Database sampling has become a popular approach to handling large amounts of data in a wide range of application areas, such as data mining and approximate query evaluation. Database samples are a potential solution when using the entire database is not cost-effective and a balance between the accuracy of the results and the computational cost of processing the large data set is preferred. Existing sampling approaches are either limited to specific application areas, to single-table databases, or to random sampling. In this paper, we propose CoDS: a novel sampling approach targeting relational databases that ensures that the sample database follows the same distribution for specific fields as the original database; in particular, it aims to maintain the distributions between tables. We evaluate the performance of our algorithm by measuring the representativeness of the sample with respect to the original database. We compare our approach with two existing solutions and show that our method performs faster and produces better results in terms of representativeness.
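The CoDS algorithm itself is not detailed in the abstract, but its stated goal, a sample whose value distribution for chosen fields matches the original database, can be approximated by classic stratified sampling: partition the rows by field value and sample each partition proportionally. The code below is a generic sketch under that assumption, not the CoDS method, and the example data is invented.

```python
import random
from collections import defaultdict

def stratified_sample(rows, field, rate, seed=0):
    """Sample each value group of `field` proportionally, so the sample
    preserves the field's value distribution. (Generic approximation of
    distribution-aware sampling; not the CoDS algorithm itself.)"""
    rng = random.Random(seed)
    groups = defaultdict(list)
    for r in rows:
        groups[r[field]].append(r)
    sample = []
    for grp in groups.values():
        k = max(1, round(len(grp) * rate))
        sample.extend(rng.sample(grp, k))
    return sample

# A 70/30 split in the original survives a 10% sample as 7/3.
rows = [{"color": "red"}] * 70 + [{"color": "blue"}] * 30
s = stratified_sample(rows, "color", rate=0.1)
```

Plain random sampling only matches the distribution in expectation; stratifying enforces it per draw, which is what makes the sample representative for the chosen field even at small rates.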