As the demand for large-scale video analysis grows, video retrieval research is also becoming more active. In 2014, ISO/IEC MPEG began standardizing Compact Descriptors for Video Analysis (CDVA), which has since been adopted as a standard. However, the standardized CDVA is difficult to compare with other methods because the MPEG-CDVA dataset used for performance verification has not been disclosed, even though follow-up studies are underway with multiple versions of the CDVA experimental model. In addition, previous studies provide insufficient analysis of the modules constituting the CDVA framework. We therefore conduct self-evaluations of CDVA to analyze the impact of each module on the retrieval task. Furthermore, to overcome the obstacles identified through these self-evaluations, we propose temporal nested invariance pooling (TNIP), which achieves temporal robustness by improving nested invariance pooling (NIP), one of the features in CDVA. Finally, benchmarks of the existing CDVA and the proposed approach are provided on several public datasets, showing that the CDVA framework can boost retrieval performance when the proposed approach is utilized.

INDEX TERMS Content-based retrieval, information representation, MPEG standards.
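The core idea behind temporal pooling of frame-level features can be sketched as follows. This is a simplified, hypothetical illustration (plain average pooling with L2 normalization), not the standardized CDVA/NIP implementation; the function name and shapes are assumptions for illustration only.

```python
from typing import List


def temporal_pool(frame_descriptors: List[List[float]]) -> List[float]:
    """Pool per-frame descriptors into a single clip-level descriptor.

    Averages each descriptor dimension across the temporal axis and then
    L2-normalizes the result, so the clip descriptor is robust to which
    individual frames were sampled. The real NIP/TNIP pipeline adds
    further nested invariance stages on top of an idea like this.
    """
    n_frames = len(frame_descriptors)
    dim = len(frame_descriptors[0])
    # Average pooling over time, one value per descriptor dimension.
    pooled = [sum(f[d] for f in frame_descriptors) / n_frames for d in range(dim)]
    # L2 normalization so descriptors are comparable across clips.
    norm = sum(x * x for x in pooled) ** 0.5 or 1.0
    return [x / norm for x in pooled]
```

A clip whose frames emphasize different dimensions still yields one balanced, unit-length descriptor, which is what makes matching across differently sampled versions of the same video possible.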
Customer demand for product search is growing as a result of the recent growth of the e-commerce market. In line with this trend, studies on object-centric retrieval using product images have emerged, but such approaches struggle to handle complex user-environment scenarios and require vast amounts of data. In this paper, we propose the Video E-commerce Retrieval Dataset (VERD), which consists of user-perspective videos. In addition, a benchmark and additional experiments are presented to demonstrate the need for independent research on product-centered video-based retrieval. VERD is publicly accessible for academic research and can be downloaded by contacting the authors by email.
This paper focuses on multispectral pedestrian detection, which can assign human-aware properties to automated forklifts and thereby prevent accidents, such as collisions, at an early stage. Since no multispectral pedestrian detection dataset existed in the intralogistics domain, we collected one; the dataset employs a method that aligns image pairs from different domains, i.e., RGB and thermal, without a cumbersome device such as a beam splitter, instead exploiting the disparity between RGB sensors and camera geometry. In addition, we propose a multispectral pedestrian detector called SSD 2.5D that not only detects pedestrians but also estimates the distance between an automated forklift and workers. In extensive experiments, detection and centroid-localization performance is validated with respect to evaluation metrics used in the autonomous driving domain, but with distinct categories, such as a hazardous zone and a warning zone, to make the evaluation more applicable to the intralogistics domain.
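The distance estimation described above relies on the classic pinhole-stereo relation between disparity and depth. The sketch below illustrates that relation and a zone classifier; the function names and the 2 m / 5 m zone thresholds are hypothetical placeholders, not values from the paper.

```python
def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Recover metric depth from stereo disparity: Z = f * B / d.

    focal_px     -- focal length in pixels
    baseline_m   -- distance between the two camera centers in meters
    disparity_px -- horizontal pixel offset of the same point in both views
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a visible point")
    return focal_px * baseline_m / disparity_px


def zone_label(depth_m: float, hazard_m: float = 2.0, warn_m: float = 5.0) -> str:
    """Map a pedestrian's depth to a safety zone (illustrative thresholds)."""
    if depth_m < hazard_m:
        return "hazardous"
    if depth_m < warn_m:
        return "warning"
    return "safe"
```

For example, with a 700 px focal length and a 12 cm baseline, a 42 px disparity corresponds to a depth of 2.0 m, which would fall at the boundary between the hazardous and warning zones under these assumed thresholds.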