RESEARCH PAPER. SCIENCE CHINA Information Sciences
Intelligent manufacturing is the development trend of the steel industry. To address the difficulty of dynamic steel production scheduling in a complex environment and to provide a path for moving steel production toward intelligent manufacturing, a cyber-physical system (CPS) oriented steel production scheduling framework is proposed. The characteristics of the dynamic steel production scheduling model are studied, and an ontology-based knowledge model for steel CPS production scheduling, together with an ontology-attribute knowledge representation method, is proposed. Heuristic scheduling rules are established for dynamic scheduling, and on this basis a hyper-heuristic algorithm based on genetic programming is presented. A learning-based high-level selection strategy is adopted to manage the low-level heuristics, and an automatic scheduling-rule generation framework based on genetic programming is designed to manage and generate effective heuristic rules and to solve scheduling problems under different production disturbances. Finally, the performance of the algorithm is verified on a simulation case.
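The abstract describes evolving dispatching rules with genetic programming: rules are expression trees over job attributes, and a high-level strategy keeps the best-performing low-level heuristics. The following is a minimal sketch of that idea, not the paper's algorithm; the job attributes (`proc_time`, `due_date`, `wait_time`), the total-tardiness objective, and the simple mutate-survivors loop are all illustrative assumptions.

```python
import random

# Terminals a dispatching rule may read from a job (hypothetical attributes).
TERMINALS = ["proc_time", "due_date", "wait_time"]
OPS = {"+": lambda a, b: a + b,
       "-": lambda a, b: a - b,
       "*": lambda a, b: a * b}

def random_tree(depth=2):
    """Grow a random expression tree representing a priority rule."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(TERMINALS)
    op = random.choice(list(OPS))
    return (op, random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, job):
    """Compute a job's priority under a rule (lower value = dispatch first)."""
    if isinstance(tree, str):
        return job[tree]
    op, left, right = tree
    return OPS[op](evaluate(left, job), evaluate(right, job))

def schedule(rule, jobs):
    """Dispatch jobs on one machine by ascending priority; return total tardiness."""
    order = sorted(jobs, key=lambda j: evaluate(rule, j))
    t, tardiness = 0, 0
    for j in order:
        t += j["proc_time"]
        tardiness += max(0, t - j["due_date"])
    return tardiness

def mutate(tree):
    """Replace a randomly chosen subtree with a freshly grown one."""
    if isinstance(tree, str) or random.random() < 0.5:
        return random_tree(depth=2)
    op, left, right = tree
    if random.random() < 0.5:
        return (op, mutate(left), right)
    return (op, left, mutate(right))

def evolve(jobs, pop_size=30, generations=20):
    """Evolve a population of rules; keep the better half, mutate to refill."""
    pop = [random_tree() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda r: schedule(r, jobs))
        survivors = pop[: pop_size // 2]
        pop = survivors + [mutate(random.choice(survivors)) for _ in survivors]
    return pop[0]
```

In a dynamic setting, the same loop would be re-run (or the rule population re-ranked) whenever a production disturbance changes the job set, which is the role the learning-based high-level strategy plays in the paper.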
Achieving convincing visual consistency between virtual objects and a real scene mainly relies on the lighting effects of virtual-real composite scenes. The problem becomes even more challenging when virtual objects must be lit using only a single real image. Recently, scene understanding from a single image has made great progress; however, the estimated geometry, semantic labels and intrinsic components provide mostly coarse information and are not accurate enough to re-render the whole scene. Carefully integrating this estimated coarse information can nevertheless yield an estimate of the illumination parameters of the real scene. We present a novel method that uses the coarse information produced by current scene understanding technology to estimate the parameters of a ray-based illumination model and light virtual objects in a real scene. Our key idea is to estimate the illumination from a sparse set of small 3D surfaces using normal and semantic constraints. The coarse shading image obtained by intrinsic image decomposition is treated as the irradiance of the selected small surfaces, and the virtual objects are then illuminated with the estimated illumination parameters. Experimental results show that our method can convincingly light virtual objects in a single real image, without any pre-recorded 3D geometry, reflectance, illumination acquisition equipment or imaging information of the image.
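The core step described above, recovering illumination parameters from sampled surface patches whose normals and coarse irradiance are known, can be sketched as a least-squares fit. This is an illustrative simplification, not the paper's ray-based model: it assumes a single Lambertian directional light plus an ambient term, and that the selected patches face the light so shading is approximately linear in the normal.

```python
import numpy as np

def estimate_illumination(normals, irradiance):
    """Fit ambient term a and light vector L so irradiance_k ≈ a + n_k · L.

    normals: (K, 3) unit surface normals of the selected small patches.
    irradiance: (K,) coarse shading values from intrinsic decomposition.
    """
    K = normals.shape[0]
    # Design matrix: one column of ones (ambient) plus the normal components.
    A = np.hstack([np.ones((K, 1)), normals])
    x, *_ = np.linalg.lstsq(A, irradiance, rcond=None)
    ambient, light = x[0], x[1:]
    return ambient, light

def shade(normal, ambient, light):
    """Re-light a virtual surface point with the estimated parameters."""
    return ambient + max(0.0, float(np.dot(normal, light)))
```

With synthetic data the fit recovers the generating parameters exactly; on real coarse estimates, robust weighting over the semantic- and normal-constrained patch selection would replace the plain least squares.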
Stitching motions from multiple videos into a single video scene is a challenging task in current video fusion and mosaicing research and in film production. In this paper, we present a novel method of video motion stitching based on the similarity of trajectory and position of foreground objects. First, multiple video sequences are registered in a common reference frame, and the static and dynamic backgrounds are estimated: the former is used to distinguish the foreground from the background and the static region from the dynamic region, while the latter is used to mosaic the warped input video sequences into a panoramic video. Next, motion similarity is calculated from trajectory similarity and position similarity, and the corresponding motion parts are extracted from the multiple video sequences. Finally, using these corresponding motion parts, the foregrounds of the different videos and the dynamic backgrounds are fused into a single video scene through Poisson editing, with the motions involved being stitched together. Our major contributions are a framework for multiple-video mosaicing based on motion similarity and a method of calculating motion similarity from trajectory similarity and position similarity. Experiments on everyday videos show that the agreement of trajectory and position similarities with the real motion similarity plays a decisive role in determining whether two motions can be stitched. We obtain satisfactory results for motion stitching and video mosaicing.
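The contribution of combining trajectory similarity and position similarity into one motion-similarity score can be sketched as follows. This is a hedged illustration, not the paper's formulation: the equal-length tracks, the exponential kernels, and the weight `w` are all assumed choices; trajectory shape is compared after removing the mean position, while position similarity compares where the motions occur in the common reference frame.

```python
import numpy as np

def trajectory_similarity(t1, t2):
    """Compare trajectory shapes after centering (position removed).

    t1, t2: (T, 2) arrays of foreground-object positions per frame,
    already registered in the common reference frame.
    """
    c1 = t1 - t1.mean(axis=0)
    c2 = t2 - t2.mean(axis=0)
    return float(np.exp(-np.linalg.norm(c1 - c2) / len(t1)))

def position_similarity(t1, t2):
    """Compare mean positions of the two motions in the reference frame."""
    return float(np.exp(-np.linalg.norm(t1.mean(axis=0) - t2.mean(axis=0))))

def motion_similarity(t1, t2, w=0.5):
    """Weighted combination; motions with a high score are stitch candidates."""
    return w * trajectory_similarity(t1, t2) + (1 - w) * position_similarity(t1, t2)
```

Two identical tracks score 1.0; a track translated across the frame keeps full trajectory similarity but loses position similarity, which matches the observation above that both components must agree for two motions to be stitchable.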