Physical-virtual immersion and real-time interaction play an essential role in cultural and language learning. Augmented reality (AR) technology can seamlessly merge virtual objects with real-world images to create immersion, while computer vision (CV) technology can recognize free-hand gestures in live images to enable intuitive interaction. We therefore incorporate the latest AR and CV algorithms into a Virtual English Classroom, called VECAR, to promote immersive and interactive language learning. Wearing a pair of mobile computing glasses, users can interact with virtual content in three-dimensional space using intuitive free-hand gestures. We design three cultural learning activities that introduce students to authentic cultural products and new cultural practices, and allow them to examine various cultural perspectives. The objectives of VECAR are to make cultural and language learning appealing, improve cultural learning effectiveness, and enhance interpersonal communication between teachers and students.
Vision-based traffic surveillance plays an important role in traffic management. However, outdoor illumination changes, cast shadows, and vehicle variations often create problems for video analysis and processing. The authors therefore propose a real-time, cost-effective traffic monitoring system that can reliably perform traffic flow estimation and vehicle classification at the same time. First, the foreground is extracted using a pixel-wise weighting list that models the dynamic background; shadows are discriminated using colour and edge invariants. Second, the foreground on a specified check-line is collected over time to form a spatial-temporal profile image. Third, the traffic flow is estimated by counting the number of connected components in the profile image. Finally, the vehicle type is classified according to the size of the foreground mask region. In addition, several traffic measures, including traffic velocity, flow, occupancy, and density, are estimated from the analysis of the segmentation. The availability and reliability of these traffic measures provide critical information for public transportation monitoring and intelligent traffic control. Since the proposed method processes only a small area close to the check-line to collect the spatial-temporal profile for analysis, the complete system is much more efficient than existing visual traffic flow estimation methods.
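The counting and classification steps of the abstract above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes the spatial-temporal profile has already been binarized, treats each 4-connected component as one vehicle, and uses a made-up area threshold (`small_max`) to separate size classes.

```python
from collections import deque

def count_and_classify(profile, small_max=8):
    """Count 4-connected components in a binary profile image and
    classify each as 'small' or 'large' by pixel area.
    `small_max` is an illustrative threshold, not from the paper."""
    rows, cols = len(profile), len(profile[0])
    seen = [[False] * cols for _ in range(rows)]
    classes = []
    for r in range(rows):
        for c in range(cols):
            if profile[r][c] and not seen[r][c]:
                # Flood-fill one component, measuring its pixel area.
                area, queue = 0, deque([(r, c)])
                seen[r][c] = True
                while queue:
                    y, x = queue.popleft()
                    area += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and profile[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                classes.append("small" if area <= small_max else "large")
    return len(classes), classes

# Toy profile with two blobs: a 2x2 car-sized mask and a 3x4 truck-sized one.
profile = [
    [1, 1, 0, 0, 1, 1, 1, 1],
    [1, 1, 0, 0, 1, 1, 1, 1],
    [0, 0, 0, 0, 1, 1, 1, 1],
]
flow, types = count_and_classify(profile)
print(flow, types)  # → 2 ['small', 'large']
```

In the paper's setting the component count gives the traffic flow over the collection interval, while the per-component area stands in for the foreground mask size used for vehicle classification.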
A computer vision-based system using images from an airborne aircraft can increase flight safety by helping the pilot detect obstacles in the flight path and avoid mid-air collisions. Such a system fits naturally with the development of an external vision system proposed by NASA for use in high-speed civil transport aircraft with limited cockpit visibility. The detection techniques should provide high detection probability for obstacles that can vary from subpixels to a few pixels in size, while maintaining a low false alarm probability in the presence of noise and severe background clutter. Furthermore, the detection algorithms must be able to report such obstacles in a timely fashion, imposing severe constraints on their execution time. For this purpose, we have implemented a number of algorithms to detect airborne obstacles using image sequences obtained from a camera mounted on an aircraft. This paper describes the methodology used for characterizing the performance of the dynamic programming obstacle detection algorithm and its special cases. The experimental results were obtained using several types of image sequences, with simulated and real backgrounds. The approximate performance of the algorithm is also theoretically derived using principles of statistical analysis in terms of the signal-to-noise ratio (SNR) required for the probabilities of false alarms and misdetections to be lower than prespecified values. The theoretical and experimental performance are compared in terms of the required SNR.
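To make the required-SNR notion concrete, the following sketch uses a much simpler model than the paper's clutter analysis: a single-pixel threshold test on a constant-intensity target in additive white Gaussian noise (an assumption here, not the paper's derivation). Under that model, the SNR needed to meet given false-alarm and misdetection probabilities is the sum of two standard normal quantiles.

```python
from statistics import NormalDist

def required_snr(p_fa, p_md):
    """Amplitude SNR so that a Gaussian threshold detector achieves
    P(false alarm) <= p_fa and P(misdetection) <= p_md.
    Assumes a single-pixel test in white Gaussian noise."""
    q = NormalDist().inv_cdf  # standard normal quantile function
    return q(1.0 - p_fa) + q(1.0 - p_md)

# E.g., one false alarm per 10,000 pixels and a 1% miss rate:
snr = required_snr(1e-4, 1e-2)
print(round(snr, 2))
```

Tightening either probability pushes the corresponding quantile, and hence the required SNR, upward, which is the qualitative trade-off the paper's comparison of theoretical and experimental SNR explores.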