Recent outstanding results of supervised object detection in competitions and challenges are often associated with specific metrics and datasets. The evaluation of such methods applied in different contexts has increased the demand for annotated datasets. Annotation tools represent the location and size of objects in distinct formats, leading to a lack of consensus on the representation. Such a scenario often complicates the comparison of object detection methods. This work alleviates this problem along the following lines: (i) it provides an overview of the most relevant evaluation methods used in object detection competitions, highlighting their peculiarities, differences, and advantages; (ii) it examines the most commonly used annotation formats, showing how different implementations may influence the assessment results; and (iii) it provides a novel open-source toolkit supporting different annotation formats and 15 performance metrics, making it easy for researchers to evaluate the performance of their detection algorithms on most known datasets. In addition, this work proposes a new metric, also included in the toolkit, for evaluating object detection in videos that is based on the spatio-temporal overlap between the ground-truth and detected bounding boxes.
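The spatio-temporal overlap idea can be sketched as a "tube IoU": per-frame box intersections and unions are accumulated over time before taking their ratio. This is a common formulation, not necessarily the exact definition used in the toolkit; the function names and the frame-indexed track representation below are illustrative assumptions.

```python
def box_iou_area(a, b):
    """Intersection and union areas of two boxes given as (x1, y1, x2, y2)."""
    iw = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    ih = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = iw * ih
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter, area_a + area_b - inter

def spatio_temporal_iou(gt_track, det_track):
    """IoU between two bounding-box tracks, each a dict: frame -> box.

    Frames where only one track has a box contribute their full box
    area to the union, penalizing temporal misalignment.
    """
    inter_total = union_total = 0.0
    for frame in set(gt_track) | set(det_track):
        if frame in gt_track and frame in det_track:
            inter, union = box_iou_area(gt_track[frame], det_track[frame])
        else:
            box = gt_track.get(frame) or det_track.get(frame)
            inter, union = 0.0, (box[2] - box[0]) * (box[3] - box[1])
        inter_total += inter
        union_total += union
    return inter_total / union_total if union_total else 0.0
```

For example, a detection track that covers only the left half of the ground-truth box in every frame yields a score of 0.5.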
In this paper, we address the problem of no-reference quality assessment for digital pictures corrupted with blur. We start by generating a large database of real images containing pictures taken by human users in a variety of situations, and by conducting subjective tests to generate the ground truth associated with those images. Based upon this ground truth, we select a number of high-quality pictures and artificially degrade them with different intensities of simulated blur (Gaussian and linear motion), totalling 6000 simulated blur images. We extensively evaluate the performance of state-of-the-art strategies for no-reference blur quantification in different blurring scenarios, and propose a paradigm for blur evaluation in which an effective method is pursued by combining several metrics and low-level image features. We test this paradigm by designing a no-reference quality assessment algorithm for blurred images that combines different metrics in a classifier based upon a neural network structure. Experimental results show that this leads to an improved performance that better reflects the images' ground truth. Finally, based upon the real image database, we show that the proposed method also outperforms other algorithms and metrics in realistic blur scenarios.
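A representative low-level feature of the kind such a combined approach could draw on is the variance of the Laplacian, a classic no-reference sharpness proxy: blur suppresses high-frequency content, so the Laplacian response flattens out. This specific feature is an illustrative assumption, not necessarily one of the metrics combined in the paper.

```python
import numpy as np

def laplacian_variance(img):
    """Variance of the 4-neighbour discrete Laplacian of a 2-D grayscale
    image. Lower values indicate stronger blur (less high-frequency energy)."""
    lap = (img[:-2, 1:-1] + img[2:, 1:-1] + img[1:-1, :-2] + img[1:-1, 2:]
           - 4.0 * img[1:-1, 1:-1])
    return float(lap.var())
```

In a learned combination scheme, scores like this one would be stacked into a feature vector and fed to the classifier alongside other blur metrics.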
In this paper, the multidimensional multiscale parser (MMP) is employed for encoding electromyographic signals. The experiments were carried out with real signals acquired in the laboratory and show that the proposed scheme is effective, outperforming even state-of-the-art wavelet-based schemes from the literature in terms of percent root-mean-square difference (PRD) versus compression ratio.
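The PRD-versus-compression-ratio trade-off can be made concrete with the standard definitions of the two quantities; note that some PRD variants first subtract the signal mean, and the exact variant used in the paper is not specified here.

```python
import numpy as np

def prd(original, reconstructed):
    """Percent root-mean-square difference between an original signal and
    its reconstruction (lower is better)."""
    original = np.asarray(original, dtype=float)
    reconstructed = np.asarray(reconstructed, dtype=float)
    return 100.0 * np.sqrt(np.sum((original - reconstructed) ** 2)
                           / np.sum(original ** 2))

def compression_ratio(original_bits, compressed_bits):
    """Compression ratio: size of the raw signal over the encoded size."""
    return original_bits / compressed_bits
```

A codec is then compared by plotting PRD against the achieved compression ratio: at a given ratio, the scheme with the lower PRD reconstructs the EMG signal more faithfully.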
Light field imaging is a promising new technology that allows the user not only to change the focus and perspective after taking a picture, but also to generate 3D content, among other applications. However, light field images are characterized by large amounts of data, and there is a lack of coding tools to efficiently encode this type of content. Therefore, this paper proposes the addition of two new prediction tools to the HEVC framework to improve its coding efficiency. The first tool is based on local linear embedding-based prediction, and the second on self-similarity compensated prediction. Experimental results show improvements over JPEG and HEVC in terms of average bitrate savings of 71.44% and 31.87%, and average PSNR gains of 4.73 dB and 0.89 dB, respectively.
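For readers unfamiliar with the reported figures, the two measures can be sketched as follows. Coding results of this kind are typically averaged with the Bjøntegaard-delta (BD) method over several rate points; the simple single-point formulas below are an illustrative simplification, and the function names are assumptions.

```python
import math

def psnr(mse, peak=255.0):
    """Peak signal-to-noise ratio in dB for a given mean squared error,
    assuming 8-bit samples by default."""
    return 10.0 * math.log10(peak ** 2 / mse)

def bitrate_saving(rate_ref, rate_test):
    """Bitrate saving (%) of a test codec relative to a reference codec
    at comparable quality; positive means the test codec is smaller."""
    return 100.0 * (rate_ref - rate_test) / rate_ref
```

Under these definitions, a 31.87% bitrate saving over HEVC means the proposed tools encode the same content at roughly two-thirds of the HEVC bitrate for comparable quality.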