We present a novel two-pass framework for counting the number of people in an environment in which multiple cameras provide different views of the subjects. By exploiting the complementary information captured by the cameras, we transfer knowledge between them to address the difficulties of people counting and improve performance. The contribution of this paper is threefold. First, we observe that normalizing the perspective of visual features and estimating the size of a crowd are highly correlated tasks, so we treat them as a joint learning problem. The derived counting model is scalable and provides more accurate results than existing approaches. Second, we introduce an algorithm that matches groups of pedestrians in images captured by different cameras. The matches provide a common domain for knowledge transfer, so we can work with multiple cameras without worrying about their differences. Third, the proposed counting system comprises a pair of collaborative regressors. The first determines the people count from features extracted from intracamera visual information, whereas the second calculates a residual correction by considering the conflicts between intercamera predictions. The two regressors are elegantly coupled and together provide an accurate people counting system. The results of experiments in various settings show that, overall, our approach outperforms comparable baseline methods. The significant performance improvement demonstrates the effectiveness of our two-pass regression framework.
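The two-pass idea above can be sketched in a few lines: a first regressor maps intra-camera features to a count, and a second regressor predicts a residual correction from the disagreement between cameras. This is a minimal illustrative sketch with synthetic data; the feature construction, regressor choice, and variable names are assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Pass 1: a per-camera regressor maps intra-camera visual features to a count.
X = rng.normal(size=(100, 4))                      # synthetic frame features
true_w = np.array([2.0, -1.0, 0.5, 3.0])
y = X @ true_w + rng.normal(scale=0.1, size=100)   # synthetic ground-truth counts

w1, *_ = np.linalg.lstsq(X, y, rcond=None)
base_pred = X @ w1                                 # first-pass count estimate

# Pass 2: a residual regressor corrects the first pass using the conflict
# between this camera's prediction and another camera's (simulated here).
other_cam_pred = base_pred + rng.normal(scale=0.5, size=100)
conflict = (base_pred - other_cam_pred).reshape(-1, 1)
residual = y - base_pred
w2, *_ = np.linalg.lstsq(conflict, residual, rcond=None)

final_pred = base_pred + conflict @ w2             # coupled two-pass estimate
```

Because the second pass fits the residual of the first, the coupled estimate can only reduce the training error relative to the first pass alone.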
The complete genomes of skunkpox virus, volepox virus, and raccoonpox virus were sequenced and annotated. Phylogenetic analysis of these genomes indicates that although these viruses are all orthopoxviruses, they form a clade distinct from the other known species. This supports the ancient divergence of the North American orthopoxviruses from other members of the genus. Only two open reading frames appear to be unique to this group of viruses, but a relatively small number of insertions/deletions contributes to the varied gene content of this clade. The availability of these genomes will help determine whether skunkpox and volepox viruses share the characteristics that make raccoonpox virus a useful vaccine vector.
We introduce a technique for calibrating camera motions in basketball videos. In particular, our method transforms player positions into standard basketball court coordinates, enabling applications such as tactical analysis and semantic basketball video retrieval. To achieve a robust calibration, we reconstruct the panoramic basketball court from a video and then warp the panoramic court to a standard one. As opposed to previous approaches, which detect the court lines and corners of each video frame individually, our technique considers all video frames simultaneously to achieve calibration; hence, it is robust to illumination changes and player occlusions. To demonstrate the feasibility of our technique, we present a stroke-based system that allows users to retrieve basketball videos. Our system tracks player trajectories in broadcast basketball videos and then rectifies the trajectories to a standard basketball court by using our camera calibration method. Consequently, users can apply stroke queries during retrieval to indicate how the players move in gameplay. The main advantage of this interface is an explicit query of basketball videos, so that unwanted outcomes can be prevented. We show the results in Figs. 1, 7, 9, 10 and our accompanying video to exhibit the feasibility of our technique.
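The rectification step described above — mapping tracked image positions to standard court coordinates — is commonly done with a planar homography. The sketch below shows the mechanics of applying a 3x3 homography to 2D player positions; the matrix values and point coordinates are made up for illustration and are not taken from the paper.

```python
import numpy as np

# Hypothetical homography from image space to court space (values invented).
H = np.array([[1.2, 0.1, -30.0],
              [0.0, 1.5, -10.0],
              [0.0, 0.001, 1.0]])

def to_court(points, H):
    """Apply homography H to an Nx2 array of image points.

    Points are lifted to homogeneous coordinates, transformed, and
    dehomogenized back to Nx2 court-space coordinates.
    """
    pts = np.hstack([points, np.ones((len(points), 1))])  # homogeneous coords
    warped = pts @ H.T
    return warped[:, :2] / warped[:, 2:3]                 # divide by w

players = np.array([[320.0, 240.0], [100.0, 50.0]])       # image-space positions
court_positions = to_court(players, H)
```

In practice H would be estimated from the panoramic-court reconstruction rather than written by hand, but the application of the transform to each trajectory point is exactly this.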
Defect removal and object removal are two problems of recent interest in video inpainting. We take a further step and replace the removed objects in a video sequence by implanting objects from another video. Before implantation, we improve an exemplar-based image inpainting algorithm with a new patch matching strategy that incorporates edge properties. The data term used in the priority computation of candidate patches is also redefined. We take into consideration a variety of temporal continuations of the foreground and background, and then propose a motion-compensated inpainting procedure. The inpainted video backgrounds are visually pleasant, with smooth transitions. A simple tracking algorithm is then used to produce a foreground video, which is implanted into the inpainted background video. Our results are available at http://www.mine.tku.edu.tw/inpainting.
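The priority computation mentioned above follows the usual exemplar-based (Criminisi-style) pattern: a patch on the fill front is scored by a confidence term times a data term, and the highest-priority patch is filled first. This is a simplified sketch, not the paper's redefined data term; here the edge-related data term is approximated by gradient magnitude at the patch center, and all arrays are toy inputs.

```python
import numpy as np

def patch_priority(confidence, grad_x, grad_y, center, half=4):
    """Priority of the patch centered at `center` on the fill front.

    confidence: 2D map, 1.0 for known pixels and 0.0 inside the hole.
    grad_x, grad_y: image gradients used as a crude edge-strength data term.
    """
    r, c = center
    conf_patch = confidence[r - half:r + half + 1, c - half:c + half + 1]
    C = conf_patch.mean()                        # confidence term
    D = np.hypot(grad_x[r, c], grad_y[r, c])     # data term (edge strength)
    return C * D

# Toy example: a 32x32 confidence map with a 10x10 hole of zero confidence.
conf = np.ones((32, 32))
conf[10:20, 10:20] = 0.0
gx = np.full((32, 32), 0.5)
gy = np.full((32, 32), 0.5)
p = patch_priority(conf, gx, gy, center=(10, 10))
```

Patches whose neighborhoods are mostly known (high confidence) and that sit on strong edges (high data term) are filled first, which is what propagates structure into the hole before texture.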