Practical disassembly process planning is extremely important for efficient material recycling and component reuse. Research on process planning in the literature has focused on generating optimal sequences from predictive product information. Used products, unfortunately, exhibit high uncertainty because they may experience very different conditions during their use stage. The indeterminate characteristics associated with used products often make a predetermined plan unrealistic. The disassembly process therefore has to be decided dynamically, adapting to each product's specific status. To deal with uncertainty in a dynamic decision-making process, this paper presents a fuzzy reasoning Petri net (FRPN) model to represent the relevant decision-making rules in the disassembly process. Using the proposed fuzzy reasoning algorithm based on the FRPN model, multicriterion disassembly rules can be evaluated in parallel to make decisions automatically and quickly. Instead of producing a disassembly sequence before disassembling a whole product, the proposed method makes intelligent decisions at each disassembly step based on the dynamically updated status of the product's components. It is therefore adaptive to changes that arise during the process. Finally, an example is used to illustrate the application of the proposed methodology.
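To make the fuzzy-reasoning idea concrete, here is a minimal sketch of how rule firing in a fuzzy reasoning Petri net can be composed: places hold truth degrees of propositions, and each transition fires a rule whose output truth is the minimum of its input truth degrees scaled by the rule's certainty factor. The component-status propositions, rules, and numeric values below are invented for illustration and are not the paper's actual model.

```python
def fire_rule(truths, inputs, cf):
    """Output truth of a rule: min of the input truth degrees times the
    rule's certainty factor (a common fuzzy-reasoning composition)."""
    return min(truths[p] for p in inputs) * cf

# Hypothetical component-status propositions with truth degrees in [0, 1].
truths = {
    "corroded_fasteners": 0.8,
    "high_material_value": 0.6,
    "reusable_condition": 0.3,
}

# Hypothetical rules: (input propositions, certainty factor, conclusion).
rules = [
    (("corroded_fasteners",), 0.9, "use_destructive_removal"),
    (("high_material_value", "reusable_condition"), 0.95, "disassemble_for_reuse"),
]

# Parallel evaluation: fire every enabled rule, keep the max truth per conclusion.
conclusions = {}
for inputs, cf, concl in rules:
    t = fire_rule(truths, inputs, cf)
    conclusions[concl] = max(conclusions.get(concl, 0.0), t)

print(conclusions)
```

Because every rule is evaluated against the current truth degrees, re-running the reasoning after each disassembly step with updated component status naturally yields the adaptive behaviour the abstract describes.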
Object recognition in real-world environments is one of the fundamental tasks in the computer vision and robotics communities. With advances in sensing technology and low-cost depth sensors, high-quality RGB and depth images can be recorded synchronously, and object recognition performance can be improved by exploiting them jointly. RGB-D-based object recognition has evolved from early methods using hand-crafted representations to the current state-of-the-art deep learning-based methods. With the undeniable success of deep learning, especially convolutional neural networks (CNNs), in the visual domain, the natural progression of deep learning research points to problems involving larger and more complex multimodal data. In this paper, we provide a comprehensive survey of recent multimodal CNN (MMCNN)-based approaches that have demonstrated significant improvements over previous methods. We highlight two key issues, namely training data deficiency and multimodal fusion. In addition, we summarize and discuss the publicly available RGB-D object recognition datasets and present a comparative performance evaluation of the surveyed methods on these benchmark datasets. Finally, we identify promising avenues of research in this rapidly evolving field. This survey will not only give researchers a good overview of state-of-the-art methods for RGB-D-based object recognition but also provide a reference for other multimodal machine learning applications, e.g., multimodal medical image fusion, audiovisual speech recognition, and multimedia retrieval and generation.
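The multimodal fusion issue highlighted above can be illustrated with a toy sketch of the two most common fusion points, feature-level and decision-level fusion. The feature dimensions and score vectors below are invented stand-ins for the outputs of the RGB and depth CNN streams, not any specific surveyed architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
rgb_feat = rng.standard_normal(128)    # stand-in for an RGB-stream CNN feature
depth_feat = rng.standard_normal(128)  # stand-in for a depth-stream CNN feature

# Feature-level (early) fusion: concatenate the per-stream features
# before a shared classifier.
fused_concat = np.concatenate([rgb_feat, depth_feat])

# Decision-level (late) fusion: average per-stream class scores instead.
rgb_scores = rng.random(10)
depth_scores = rng.random(10)
fused_scores = (rgb_scores + depth_scores) / 2

print(fused_concat.shape, fused_scores.shape)
```

The trade-off sketched here is the one the survey examines in depth: early fusion lets the classifier model cross-modal interactions, while late fusion keeps the streams independent and is more robust when one modality is missing or noisy.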
A random three-dimensional (3D) porous medium can be reconstructed from a two-dimensional (2D) image by reconstructing one image from the original 2D image and then repeatedly using the result to reconstruct the next 2D image. The reconstructed images are then stacked together to generate the entire reconstructed 3D porous medium. To do this successfully, a crucial issue must be addressed: controlling the continuity and variability among adjacent layers. Continuity and variability that are consistent with the statistical characteristics of the training image (TI) ensure that the reconstructed result matches the TI. By selecting the number and locations of the sampling points in the sampling process, the continuity and variability can be controlled directly, and thus the characteristics of the reconstructed image can be controlled indirectly. In this paper, we propose and develop an original sampling method called three-step sampling. In our method, sampling points are extracted successively from the centers of 5×5 and 3×3 sampling templates and from the edge area based on a two-point correlation function. The continuity and variability of adjacent layers are considered during the three steps of the sampling process. Our method was tested on a Berea sandstone sample, and the reconstructed result was compared with the original sample using tests involving the porosity distribution, the lineal-path function, the autocorrelation function, the pore and throat size distributions, and two-phase flow relative permeabilities. The comparison indicates that many statistical characteristics of the reconstructed result closely match those of the TI and the reference 3D medium.
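The two-point correlation function that guides the edge-area sampling can be sketched as follows: for a binary image, S2(r) is the probability that two points separated by a lag r both fall in the pore phase. The synthetic image below is a placeholder; the paper's 5×5/3×3 templates and three-step sampling logic are not reproduced here.

```python
import numpy as np

def s2_x(img, r):
    """Two-point probability along the x-axis:
    P(img[i, j] == 1 and img[i, j + r] == 1)."""
    a = img[:, :-r] if r > 0 else img
    b = img[:, r:]
    return np.mean((a == 1) & (b == 1))

rng = np.random.default_rng(0)
img = (rng.random((64, 64)) < 0.3).astype(int)  # synthetic binary image, ~30% porosity

# S2(0) equals the porosity; for an uncorrelated medium S2(r) approaches
# porosity**2 as the lag r grows.
print(s2_x(img, 0), s2_x(img, 1), s2_x(img, 10))
```

In a real reconstruction, comparing S2(r) between adjacent candidate layers and the TI gives a quantitative handle on the continuity/variability balance the abstract describes.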
Image segmentation, which has become a research hotspot in the fields of image processing and computer vision, refers to the process of dividing an image into meaningful and non-overlapping regions, and it is an essential step in natural scene understanding. Despite decades of effort and many achievements, there are still challenges in feature extraction and model design. In this paper, we systematically review advances in image segmentation methods. According to the segmentation principles and image data characteristics, we mainly review three important stages of image segmentation: classic segmentation, collaborative segmentation, and semantic segmentation based on deep learning. We elaborate on the main algorithms and key techniques in each stage, compare and summarize the advantages and drawbacks of different segmentation models, and discuss their applicability. Finally, we analyze the main challenges and development trends of image segmentation techniques.
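As a concrete instance of the classic-segmentation stage, here is a minimal self-contained implementation of Otsu's thresholding, one of the standard histogram-based methods covered in such reviews: it picks the gray level that maximizes the between-class variance. The synthetic bimodal image below is invented for the demonstration.

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's method: choose the threshold maximizing the between-class
    variance of the grayscale histogram."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    omega = np.cumsum(p)                # class-0 probability up to each level
    mu = np.cumsum(p * np.arange(256))  # cumulative mean
    mu_t = mu[-1]
    # Between-class variance; the ends give 0/0, which we treat as 0.
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1 - omega))
    sigma_b = np.nan_to_num(sigma_b)
    return int(np.argmax(sigma_b))

# Synthetic bimodal image: dark background (~50) and a bright square (~200).
rng = np.random.default_rng(0)
img = np.full((64, 64), 50) + rng.integers(-10, 10, (64, 64))
img[16:48, 16:48] = 200 + rng.integers(-10, 10, (32, 32))
img = np.clip(img, 0, 255).astype(np.uint8)

t = otsu_threshold(img)
mask = img > t  # binary segmentation into object / background
print(t, mask.mean())
```

Methods like this need no training data, which is exactly why the review contrasts them with learning-based semantic segmentation, where labeled data and model design dominate.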
The firefly algorithm (FA) is a new meta-heuristic optimisation algorithm that mimics the social behaviour of fireflies flying in the tropical and temperate summer sky. In this study, a novel application of the FA is presented as it is applied to solve the visual tracking problem. A general optimisation-based tracking architecture is proposed, and the sensitivity and adjustment of the FA's parameters in the tracking system are studied. Experimental results show that the FA-based tracker can robustly track an arbitrary target in various challenging conditions. The authors compare the speed and accuracy of the FA with three typical tracking algorithms: the particle filter, mean shift, and particle swarm optimisation. Comparative results show that the FA-based tracker outperforms the other three trackers.
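The core of the standard FA can be sketched as follows (this is the generic attraction/update rule, not the authors' tracking-specific variant): each firefly moves toward every brighter one, with an attractiveness that decays exponentially with squared distance, plus a small random perturbation. The toy objective below, squared distance to a hypothetical target location, stands in for a tracking similarity score.

```python
import numpy as np

def firefly_step(X, scores, beta0=1.0, gamma=0.01, alpha=0.2, rng=None):
    """One FA iteration over positions X; lower score means brighter
    (we minimize). beta0, gamma, alpha are the usual FA parameters."""
    if rng is None:
        rng = np.random.default_rng()
    X = X.copy()
    n, d = X.shape
    for i in range(n):
        for j in range(n):
            if scores[j] < scores[i]:  # firefly j is brighter than i
                r2 = np.sum((X[i] - X[j]) ** 2)
                beta = beta0 * np.exp(-gamma * r2)  # attractiveness decays with distance
                X[i] = X[i] + beta * (X[j] - X[i]) + alpha * (rng.random(d) - 0.5)
    return X

# Toy objective: squared distance to a hypothetical target at (3, 3).
target = np.array([3.0, 3.0])
f = lambda P: np.sum((P - target) ** 2, axis=1)

rng = np.random.default_rng(0)
X = rng.uniform(-5, 5, (20, 2))
best_init = f(X).min()
for _ in range(50):
    X = firefly_step(X, f(X), rng=rng)

print(X[np.argmin(f(X))])  # best firefly found
```

Note that the brightest firefly never moves within a step, so the best score in the swarm can only improve; the parameters beta0, gamma, and alpha are exactly the kind of settings whose sensitivity the paper studies for the tracking setting.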