Abstract—Automated fire detection is an active research topic in computer vision. In this paper, we propose and analyze a new method for identifying fire in videos. Computer vision-based fire detection algorithms are usually applied in closed-circuit television surveillance scenarios with controlled backgrounds. In contrast, the proposed method can be applied not only to surveillance but also to automatic video classification for retrieval of fire catastrophes in databases of newscast content. In the latter case, there are large variations in fire and background characteristics depending on the video instance. The proposed method analyzes the frame-to-frame changes of specific low-level features describing potential fire regions. These features are color, area size, surface coarseness, boundary roughness, and skewness within estimated fire regions. Because of the flickering and random characteristics of fire, these features are powerful discriminants. The behavioral change of each of these features is evaluated, and the results are then combined using a Bayes classifier for robust fire recognition. In addition, a priori knowledge of fire events captured in videos is used to significantly improve the classification results. In edited newscast videos, the fire region is usually located in the center of the frames. This fact is used to model the probability of occurrence of fire as a function of position. Experiments illustrate the applicability of the method.
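The abstract above describes combining per-feature temporal evidence with a position-dependent prior under a Bayes classifier. The following Python sketch is a minimal toy illustration of that idea only: the likelihood-ratio form, the per-feature scale values, and the Gaussian center prior are all assumptions made for the example, not the paper's trained model.

```python
import math

def feature_likelihood_ratio(feature, change):
    # Toy likelihood ratio P(change | fire) / P(change | no fire).
    # Fire flickers, so large frame-to-frame changes favor the fire class.
    # The five features mirror those named in the abstract; the scale
    # values are illustrative assumptions, not learned parameters.
    scale = {"color": 2.0, "area": 1.5, "coarseness": 1.8,
             "boundary": 1.6, "skewness": 1.2}[feature]
    return math.exp((scale - 0.5) * change)

def center_prior(x, y, width, height, sigma=0.25):
    # Edited newscast footage tends to frame fire near the image center,
    # modeled here as an isotropic Gaussian over normalized coordinates.
    dx = x / width - 0.5
    dy = y / height - 0.5
    return math.exp(-(dx * dx + dy * dy) / (2 * sigma * sigma))

def fire_posterior(changes, x, y, width, height, prior_fire=0.5):
    # Naive-Bayes-style combination: multiply per-feature likelihood
    # ratios, weight by the positional prior, convert odds to posterior.
    ratio = 1.0
    for feature, change in changes.items():
        ratio *= feature_likelihood_ratio(feature, change)
    ratio *= center_prior(x, y, width, height)
    odds = ratio * prior_fire / (1.0 - prior_fire)
    return odds / (1.0 + odds)
```

A strongly flickering region near the frame center should score a higher posterior than a quiet region in a corner, which is exactly the behavior the positional prior is meant to encode.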
A robust approach for joint motion and disparity estimation in stereo sequences to synthesize arbitrary intermediate views is presented. The improved concept for stereo image analysis is based on a modified block matching algorithm, in which a cost function consisting of area-based correlation together with an appropriately weighted temporal smoothness term is applied. A confidence measure to evaluate the reliability of estimated correspondences is introduced. In occluded image areas and at image points with unreliable motion or disparity assignments, considerable improvements are obtained by applying an edge-assisted vector interpolation strategy. Two different image synthesis concepts are presented as well. The reported approach is verified by processing a set of sequences taken with stereo cameras having large interaxial distances. Computer simulations show that a telepresence illusion with continuous motion parallax and good image quality can be obtained using the methods presented.
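The cost function described above, area-based correlation plus a weighted temporal smoothness term, can be sketched as follows. This is a hedged toy version: SAD stands in for the area-based correlation, the weight `lam`, the exhaustive search range, and the cost-ratio confidence heuristic are all assumptions for illustration, not the paper's actual design.

```python
import numpy as np

def matching_cost(left, right, block_xy, disp, prev_disp, lam=0.1, block=8):
    """Cost for one candidate disparity: area-based SAD plus a weighted
    temporal smoothness penalty toward the previous frame's estimate."""
    x, y = block_xy
    ref = left[y:y + block, x:x + block].astype(np.float64)
    cand = right[y:y + block, x - disp:x - disp + block].astype(np.float64)
    sad = np.abs(ref - cand).sum()          # area-based correlation term
    smooth = lam * (disp - prev_disp) ** 2  # temporal smoothness term
    return sad + smooth

def estimate_disparity(left, right, block_xy, prev_disp, search=16):
    # Exhaustive search over candidate disparities. The confidence value
    # is a simple second-best/best cost ratio (an illustrative heuristic):
    # ambiguous matches give ratios near 1, distinct matches give large ones.
    costs = {d: matching_cost(left, right, block_xy, d, prev_disp)
             for d in range(search)}
    best = min(costs, key=costs.get)
    ordered = sorted(costs.values())
    confidence = ordered[1] / (ordered[0] + 1e-9)
    return best, confidence
```

Low-confidence estimates are exactly the ones the abstract proposes to replace via edge-assisted vector interpolation.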
The new video coding standard, HEVC, was developed to succeed the current standard, H.264/AVC, as the state of the art in video compression. However, there is a large amount of legacy content encoded with H.264/AVC. This paper proposes and evaluates several transcoding algorithms from the H.264/AVC to the HEVC format. In particular, a novel transcoding architecture is proposed in which the first frames of the sequence are used to compute the parameters, so that the transcoder can "learn" the mapping for that particular sequence. Then, two types of mode mapping algorithms are proposed. In the first solution, a single H.264/AVC coding parameter is used to determine the outgoing HEVC partitions using dynamic thresholding. The second solution uses linear discriminant functions to map the incoming H.264/AVC coding parameters to the outgoing HEVC partitions. This paper also reports experiments designed to study the impact of the number of frames used for training the transcoder. Comparisons with existing transcoding solutions reveal that the proposed work yields lower rate-distortion loss at competitive computational complexity.
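The first mode mapping idea above, learning a dynamic threshold on a single incoming coding parameter during the initial frames and then using a fast compare for the rest of the sequence, can be sketched as below. The one-dimensional parameter, the midpoint rule, and the 0/1 split encoding are assumptions chosen to keep the example minimal; the paper's actual thresholding is not reproduced here.

```python
def train_threshold(samples):
    """Learn a dynamic threshold from the "training" frames.

    Each sample pairs an incoming H.264/AVC coding parameter value with
    the HEVC partition decision (0 = keep CU, 1 = split) obtained by full
    rate-distortion optimization on those first frames. The midpoint
    between class means is a minimal stand-in for a learned threshold.
    """
    keep = [v for v, split in samples if split == 0]
    split = [v for v, split in samples if split == 1]
    return (sum(keep) / len(keep) + sum(split) / len(split)) / 2.0

def map_partition(param, threshold):
    # Fast mode mapping for the remaining frames: the costly RDO search
    # is replaced by a single comparison against the learned threshold.
    return 1 if param > threshold else 0
```

The second solution described in the abstract generalizes this from one parameter to several, replacing the scalar threshold with a linear discriminant function over the incoming parameter vector.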