In recent years, Convolutional Neural Networks (CNNs) have achieved state-of-the-art performance in image classification. Their architectures have largely drawn inspiration from models of the primate visual system. However, while recent neuroscience findings demonstrate non-linear operations in the responses of complex visual cells, little effort has been devoted to extending the convolution operation to non-linear forms. Typical convolutional layers are linear systems, hence their expressiveness is limited. To overcome this, various non-linearities have been used as activation functions inside CNNs, and many pooling strategies have been applied. We address the issue of developing a convolution method in the context of a computational model of the visual cortex, exploring quadratic forms through Volterra kernels. Such forms, constituting a richer function space, are used as approximations of the response profiles of visual cells. Our proposed second-order convolution is tested on CIFAR-10 and CIFAR-100. We show that a network which combines linear and non-linear filters in its convolutional layers can outperform networks that use standard linear filters with the same architecture, yielding results competitive with the state-of-the-art on these datasets.
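The abstract does not spell out the layer's exact parameterization, but a second-order convolution in the Volterra sense can be sketched as an ordinary linear convolution plus a quadratic form over each receptive-field patch. In the PyTorch sketch below, the class name `VolterraConv2d`, the low-rank factorization of the second-order kernel, and the `rank` parameter are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VolterraConv2d(nn.Module):
    """Sketch of a quadratic (second-order Volterra) convolution:
    each output is a linear term plus a quadratic form over the
    receptive-field patch. Illustrative only; the factorized
    second-order kernel is an assumption, not the authors' layer."""

    def __init__(self, in_channels, out_channels, kernel_size, rank=4):
        super().__init__()
        self.kernel_size = kernel_size
        n = in_channels * kernel_size * kernel_size  # flattened patch length
        # First-order term: a standard linear convolution.
        self.linear = nn.Conv2d(in_channels, out_channels, kernel_size)
        # Second-order kernel factorized as W2 = sum_r q_r q_r^T,
        # keeping O(rank * n) parameters instead of O(n^2).
        self.quad = nn.Parameter(0.01 * torch.randn(out_channels, rank, n))

    def forward(self, x):
        b = x.shape[0]
        # All k x k patches as columns: (B, n, L), L = #output positions.
        patches = F.unfold(x, self.kernel_size)
        # Quadratic term: sum_r (q_or . patch)^2 per output channel o.
        proj = torch.einsum('orn,bnl->borl', self.quad, patches)
        quad_out = (proj ** 2).sum(dim=2)                # (B, O, L)
        lin_out = self.linear(x)                         # (B, O, H', W')
        h, w = lin_out.shape[2:]
        return lin_out + quad_out.view(b, -1, h, w)
```

The low-rank factorization is a common trick here: a full second-order kernel over a patch of length n would require n(n+1)/2 weights per output channel, which is prohibitive for typical patch sizes.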
Multi-view capture systems are complex to engineer. They require technical knowledge to install and involve complex processes to set up. However, with ongoing developments in new production methods, we are now in a position to generate high-quality, realistic 3D assets. Nonetheless, the capture systems developed with these methods are intertwined with them, relying on custom solutions that are seldom, if ever, publicly available. We design, develop, and publicly offer a multi-view capture system based on the latest RGB-D sensor technology. We also develop a portable and easy-to-use external calibration process to enable its widespread use.
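The abstract does not describe the calibration procedure itself. As a hedged illustration only, a common building block in external (extrinsic) calibration of multiple RGB-D sensors is estimating the rigid transform between two sensors from corresponding 3D points, e.g., via the Kabsch algorithm; the sketch below assumes such correspondences are already available (for instance, from a calibration structure visible to both sensors) and is not the authors' pipeline.

```python
import numpy as np

def rigid_transform(src, dst):
    """Estimate rotation R and translation t mapping src -> dst,
    where src and dst are (N, 3) arrays of corresponding 3D points,
    using the Kabsch algorithm. Illustrative sketch: correspondences
    between the two sensors' point clouds are assumed given."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c                      # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t
```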
Depth perception is considered an invaluable source of information for various vision tasks. However, depth maps acquired with consumer-level sensors still suffer from non-negligible noise. This fact has recently motivated researchers to exploit traditional filters, as well as the deep learning paradigm, in order to suppress the aforementioned non-uniform noise while preserving geometric details. Despite these efforts, deep depth denoising remains an open challenge, mainly due to the lack of clean data that could serve as ground truth. In this paper, we propose a fully convolutional deep autoencoder that learns to denoise depth maps, overcoming the lack of ground-truth data. Specifically, the proposed autoencoder exploits multiple views of the same scene, captured from different viewpoints, in order to learn to suppress noise in a self-supervised, end-to-end manner, using depth and color information during training but only depth during inference. To enforce self-supervision, we leverage a differentiable rendering technique to exploit photometric supervision, which is further regularized using geometric and surface priors. As the proposed approach relies on raw data acquisition, a large RGB-D corpus is collected using Intel RealSense sensors. Complementary to a quantitative evaluation, we demonstrate the effectiveness of the proposed self-supervised denoising approach on established 3D reconstruction applications. Code is available at https://github.com/VCL3D/DeepDepthDenoising
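To make the self-supervision idea concrete, the sketch below illustrates one plausible photometric term: the predicted (denoised) source-view depth is back-projected to 3D, transformed into a second view, and used to sample that view's colors, which are then compared with the source colors. Function and argument names are hypothetical, and the paper's actual formulation additionally involves a differentiable renderer plus geometric and surface regularizers not reproduced here.

```python
import torch
import torch.nn.functional as F

def photometric_loss(depth_src, color_src, color_tgt, K, K_inv, T_src2tgt):
    """Sketch of a multi-view photometric self-supervision term.
    depth_src: (B, 1, H, W) denoised depth; color_src/color_tgt:
    (B, 3, H, W); K, K_inv: (B, 3, 3) intrinsics; T_src2tgt: (B, 4, 4)
    relative pose. Names and the exact loss are assumptions."""
    b, _, h, w = depth_src.shape
    # Homogeneous pixel grid: (B, 3, H*W).
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing='ij')
    pix = torch.stack([xs, ys, torch.ones_like(xs)], dim=0)
    pix = pix.float().view(1, 3, -1).expand(b, -1, -1)
    # Back-project to 3D using the source depth, move into target frame.
    pts = (K_inv @ pix) * depth_src.view(b, 1, -1)        # (B, 3, H*W)
    pts = T_src2tgt[:, :3, :3] @ pts + T_src2tgt[:, :3, 3:]
    # Project into the target image plane.
    proj = K @ pts
    uv = proj[:, :2] / proj[:, 2:].clamp(min=1e-6)
    # Normalize coordinates to [-1, 1] for grid_sample.
    u = 2 * uv[:, 0] / (w - 1) - 1
    v = 2 * uv[:, 1] / (h - 1) - 1
    grid = torch.stack([u, v], dim=-1).view(b, h, w, 2)
    # Sample target colors at the reprojected locations and compare.
    # (Masking of out-of-bounds / invalid-depth pixels omitted for brevity.)
    warped = F.grid_sample(color_tgt, grid, align_corners=True)
    return F.l1_loss(warped, color_src)
```

If the depth is noisy, the reprojected samples land on the wrong target pixels and the photometric error grows, which is what gives the network a training signal without clean ground-truth depth.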
While objective and subjective evaluation of the visual quality of compressed 3D meshes has been discussed in the literature, previous studies covered 3D meshes that were either created by 3D artists or generated by computationally expensive 3D reconstruction processes applied to high-quality 3D scans. With the advent of RGB-D sensors that operate at high frame rates and the utilization of fast 3D reconstruction algorithms, humans can now be captured and reconstructed into a 3D mesh representation in real time, enabling new (tele-)immersive experiences. The way the respective 3D mesh content is produced differs dramatically between the two cases, leading to apparent structural differences between the output meshes: the first type of content is nearly perfect and clean, while the second is much more irregular and noisy. Evaluating compression artifacts on this new type of immersive 3D media therefore constitutes an as-yet unexplored scientific area. In this paper, we subjectively assess the compression artifacts introduced by three open-source static 3D mesh codecs when compressing 3D meshes generated for immersive experiences. The subjective evaluation of the content is conducted in a Virtual Reality setting, using the forced-choice pairwise comparison methodology with existing reference. The result of this study is a mapping of the compared conditions onto a continuous ranking scale, which can be used to optimize codec choice and compression parameters, achieving an optimal balance between bandwidth and perceived quality in tele-immersive platforms.
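The abstract does not name the scaling model used to map pairwise preferences onto a continuous scale; a standard choice for forced-choice data of this kind is Bradley-Terry maximum likelihood, sketched below with the usual minorization-maximization update. The function name and the win-matrix convention are assumptions for illustration, not necessarily the study's analysis.

```python
import numpy as np

def bradley_terry(wins, iters=200):
    """Map forced-choice pairwise comparison counts to a continuous
    quality scale via Bradley-Terry maximum likelihood (MM update).
    wins[i, j] = #times condition i was preferred over condition j;
    the diagonal is assumed zero and every condition is assumed to
    win at least once, so the MLE is finite. Illustrative sketch."""
    n = wins.shape[0]
    p = np.ones(n)                         # initial worth parameters
    total = wins + wins.T                  # comparisons per pair
    w = wins.sum(axis=1)                   # total wins per condition
    for _ in range(iters):
        denom = (total / (p[:, None] + p[None, :])).sum(axis=1)
        p = w / denom
        p /= p.sum()                       # fix the arbitrary scale
    return np.log(p)                       # scores on a log scale
```

Under this model, the probability that condition i is preferred over condition j is p_i / (p_i + p_j), so the returned log-worths form exactly the kind of continuous ranking scale the study reports.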