Illegal distribution of digital movies is a significant threat to the film industry. With the advent of high-speed broadband Internet access, a pirated copy of a digital video can easily be distributed to a global audience. Digital video watermarking is a possible means of limiting this type of distribution. In existing watermarking methods, the watermark is usually embedded into the luminance channel of a video frame, which degrades imperceptibility because the human visual system is more sensitive to luminance changes than to chrominance changes. In addition, none of the existing techniques is robust to the combination of commonly used attacks, such as compression, upscaling, rotation, cropping, downscaling in resolution, frame-rate conversion, and camcording. In this paper, we first propose a basic blind digital video watermarking algorithm in which the watermark is embedded into one level of the dual-tree complex wavelet transform (DT-CWT) of the chrominance channel, yielding high-quality watermarked video, and is extracted using the same key that was used for embedding. This algorithm is robust to compression, upscaling, rotation, and cropping. An extension of this method survives downscaling to an arbitrary resolution by extracting the watermark from whichever DT-CWT level(s) match the resolution of the downscaled watermarked frame, rather than only from the embedding level. Finally, the watermark of a frame is extracted from the information of that frame alone, without the key used during embedding, providing robustness to temporal synchronization attacks such as frame-rate conversion. This scheme is also robust to compression, camcording, watermark estimation remodulation, temporal frame averaging, multiple watermark embedding, downscaling in resolution, and other geometric attacks such as upscaling, rotation, and cropping.
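The blind embed/extract idea described above can be sketched in a few lines. This is only an illustration of the principle, not the paper's algorithm: a one-level Haar wavelet stands in for the dual-tree complex wavelet transform, a single bit is embedded by quantization-index modulation of the approximation coefficients of a chrominance row, and all function names, the `step` parameter, and the sample values are hypothetical.

```python
# Minimal sketch of blind transform-domain watermarking on a chrominance
# channel. A one-level Haar wavelet stands in for the paper's dual-tree
# complex wavelet transform (DT-CWT); names and parameters are illustrative.

def haar_1level(row):
    """One-level 1-D Haar transform: returns (approx, detail) coefficients."""
    approx = [(row[i] + row[i + 1]) / 2 for i in range(0, len(row), 2)]
    detail = [(row[i] - row[i + 1]) / 2 for i in range(0, len(row), 2)]
    return approx, detail

def inverse_haar_1level(approx, detail):
    """Invert haar_1level, reconstructing the original row."""
    row = []
    for a, d in zip(approx, detail):
        row += [a + d, a - d]
    return row

def embed_bit(approx, bit, step=4.0):
    """Quantization-index-modulation style embedding of one watermark bit:
    shift each coefficient to sit step/4 above (bit 1) or below (bit 0)
    the nearest multiple of `step`."""
    out = []
    for c in approx:
        q = round(c / step) * step
        out.append(q + (step / 4 if bit else -step / 4))
    return out

def extract_bit(approx, step=4.0):
    """Blind extraction: majority vote over coefficient residuals.
    No key or original frame is needed, only the quantization step."""
    votes = sum(1 for c in approx if (c % step) < step / 2)
    return 1 if 2 * votes >= len(approx) else 0
```

Because extraction needs only the transform and the quantization step, the detector is blind: neither the original frame nor the embedded coefficients are required, which is what makes per-frame extraction after frame-rate conversion plausible.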
The use of passive infrared (PIR) triggered camera traps has increased dramatically in recent decades. Unfortunately, technical descriptions of how PIR-triggered camera traps operate have not been sufficiently clear: descriptions have often been ambiguous or misleading, and in several cases are demonstrably wrong. Such descriptions have led to erroneous interpretations of camera trapping data. This short communication clarifies how PIR sensors operate. We explain how infrared radiation is emitted and transmitted, and we describe the parts of the PIR sensor and how they detect infrared radiation and, by extension, fauna. We draw on several problematic descriptions of PIR sensors to highlight their flaws and to demonstrate where erroneous interpretations of camera trapping data have occurred. By clarifying the language and the description of PIR-triggered camera traps, this paper helps wildlife researchers and managers using camera traps avoid flawed interpretations of their data, which should reduce the effort and resources otherwise wasted as researchers attempt to test flawed hypotheses. Furthermore, this paper provides a thorough technical reference for camera trapping practitioners that is not available elsewhere in the wildlife research literature.
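A common point of confusion the abstract alludes to is that a PIR sensor is passive and differential: its two pyroelectric elements are wired in opposition, so uniform ambient infrared cancels and only a change in flux across the two detection zones (e.g., a warm animal moving past) produces a trigger. The following is a hedged toy model of that principle only; the function names, flux values, and threshold are illustrative, not taken from any real sensor.

```python
# Toy model of the differential principle behind a dual-element PIR sensor:
# two pyroelectric elements wired in opposition cancel uniform ambient
# infrared, so only a warm body moving across the two detection zones
# produces a signal. All values and the threshold are illustrative.

def pir_output(zone_a_flux, zone_b_flux):
    """Differential signal of the two opposed pyroelectric elements."""
    return zone_a_flux - zone_b_flux

def triggered(samples, threshold=1.0):
    """Trigger when the magnitude of the differential signal exceeds a
    threshold in any sample; `samples` is a sequence of (zone_a, zone_b)
    flux pairs over time."""
    return any(abs(pir_output(a, b)) > threshold for a, b in samples)
```

The point of the model: a stationary warm scene, however hot, raises both zones equally and never triggers, while an animal crossing between zones creates a transient imbalance that does. This is why PIR camera traps respond to movement across the sensor rather than to temperature per se.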
In this paper, we present a novel approach to planetary rover localization that incorporates sun sensor and inclinometer data directly into a stereo visual odometry pipeline. Utilizing the absolute orientation information provided by the sun sensor and inclinometer significantly reduces the error growth of the visual odometry path estimate. These sensors impose very low computation, power, and mass requirements, providing improved localization at nearly negligible cost. We describe the mathematical formulation of the error terms for the stereo camera, sun sensor, and inclinometer measurements, as well as the bundle adjustment framework used to determine the maximum-likelihood vehicle transformation. Extensive results are presented from experimental trials using data collected during a 10-km traverse of a Mars analogue site on Devon Island in the Canadian High Arctic. We also illustrate how our approach can be used to reduce the computational burden of visual odometry for planetary exploration missions. © 2012 Wiley Periodicals, Inc.
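The benefit of absolute orientation can be shown with a toy dead-reckoning simulation. This is not the paper's bundle-adjustment formulation: here an absolute heading fix (as a sun sensor would supply) simply resets an accumulated heading bias at fixed intervals, and all names, the bias magnitude, and the fix interval are illustrative assumptions.

```python
import math

# Illustrative sketch: relative-only odometry accumulates heading bias,
# so position error grows superlinearly; periodic absolute heading fixes
# (as from a sun sensor) bound the drift. Parameters are hypothetical.

def integrate_path(true_headings, step_len, heading_bias, fix_every=None):
    """Integrate a 2-D path from per-step headings corrupted by a constant
    heading bias. Every `fix_every` steps an absolute orientation fix
    removes the accumulated bias; fix_every=None is pure dead reckoning."""
    x = y = drift = 0.0
    for i, h in enumerate(true_headings):
        if fix_every and i % fix_every == 0:
            drift = 0.0          # absolute orientation fix resets drift
        drift += heading_bias    # bias accumulates between fixes
        est = h + drift
        x += step_len * math.cos(est)
        y += step_len * math.sin(est)
    return x, y

def position_error(true_headings, step_len, est_xy):
    """Euclidean distance between the estimated and true endpoints."""
    x = y = 0.0
    for h in true_headings:
        x += step_len * math.cos(h)
        y += step_len * math.sin(h)
    return math.hypot(est_xy[0] - x, est_xy[1] - y)
```

Even infrequent absolute fixes keep the heading error bounded between fixes, so endpoint error grows roughly linearly with distance instead of quadratically, which is the intuition behind the reduced error growth reported in the abstract.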