A multiresolution approach based on a modified wavelet transform, called the tree-structured wavelet transform or wavelet packets, is proposed. The development of this transform is motivated by the observation that a large class of natural textures can be modeled as quasi-periodic signals whose dominant frequencies are located in the middle frequency channels. With the transform, it is possible to zoom into any desired frequency channels for further decomposition. In contrast, the conventional pyramid-structured wavelet transform performs further decomposition only in the low-frequency channels. A progressive texture classification algorithm is developed that is both computationally attractive and achieves excellent classification performance. The performance of the present method is compared with that of several other methods.
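The idea distinguishing the tree-structured transform from the pyramid transform can be sketched in a few lines: instead of always splitting the low-pass channel, split whichever channel carries significant energy. The following is a minimal NumPy illustration using a Haar filter pair and an energy criterion of our own choosing; it is not the authors' implementation, and the threshold and filter are illustrative assumptions.

```python
import numpy as np

def haar_split(x):
    # One-level orthonormal Haar analysis: low-pass (approximation)
    # and high-pass (detail) half-band channels.
    x = x[: len(x) // 2 * 2]
    lo = (x[0::2] + x[1::2]) / np.sqrt(2)
    hi = (x[0::2] - x[1::2]) / np.sqrt(2)
    return lo, hi

def tree_wavelet_decompose(x, levels, energy_thresh=0.1):
    """Tree-structured (wavelet-packet-style) decomposition: at each level,
    split any channel whose energy fraction exceeds the threshold, rather
    than always splitting the low-pass channel as the pyramid transform does.
    The energy criterion is an illustrative stand-in for the paper's rule."""
    leaves = {'': x}
    for _ in range(levels):
        total = sum(np.sum(v ** 2) for v in leaves.values())
        new_leaves = {}
        for name, band in leaves.items():
            if np.sum(band ** 2) / total > energy_thresh and len(band) >= 2:
                lo, hi = haar_split(band)
                new_leaves[name + 'L'] = lo  # further split this channel
                new_leaves[name + 'H'] = hi
            else:
                new_leaves[name] = band      # leave weak channels as leaves
        leaves = new_leaves
    return leaves
```

A texture dominated by mid-frequency energy would, under this rule, drive the decomposition into the middle channels (names such as 'LH' or 'HL') rather than down the low-pass branch only.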
structed from all the AWT coefficients and from reduced coefficients are investigated, and the data reduction criteria are obtained from uniform resampling. The proposed reconstruction algorithm is found to allow an increased rate of data reduction. The auditory wavelet transform simulates the human auditory periphery only as a first-order approximation, because wavelet theory requires time-invariant filters that all have the same shape on a logarithmic scale. The filtering function at each point along the length of the human cochlea, by contrast, is dynamically adjusted according to the input sound pressure and other factors. Such variable filters should be realized in future analysis/synthesis auditory models. The auditory wavelet transform and the reconstruction algorithm may nevertheless improve signal production for auditory psychological experiments and other applications.
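The "same shape on a logarithmic scale" constraint can be made concrete: in a dyadic wavelet filterbank every filter is a scaled copy of one prototype, so the ratio of center frequency to bandwidth (the Q factor) is fixed in every band. The sketch below uses illustrative frequency values of our own, not parameters from the paper.

```python
import numpy as np

def dyadic_bands(f0=8000.0, bw0=2000.0, n=6):
    """Center frequencies and bandwidths of a dyadic (wavelet) filterbank.
    Each filter is a scaled copy of one prototype, so spacing is uniform
    on a log axis and f/bw is identical in every band.
    (f0, bw0 are illustrative values, not taken from the paper.)"""
    k = np.arange(n)
    freqs = f0 * 2.0 ** (-k)   # octave-spaced center frequencies
    bws = bw0 * 2.0 ** (-k)    # bandwidths shrink by the same factor
    return freqs, bws

freqs, bws = dyadic_bands()
# The Q factor freqs/bws is constant across bands; a real cochlear filter's
# tuning instead varies with input sound pressure, which a fixed wavelet
# filterbank cannot model -- the limitation the text describes.
```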
ACKNOWLEDGMENTS
We thank Masaaki Hondo, Kazuhiko Kakehi, and Tatsuya Hirahara, as well as the members of our research group, for their valuable advice and comments.
In this paper, we introduce the problem of collaborative perception, where robots can combine their local observations with those of neighboring agents in a learnable way to improve accuracy on a perception task. Unlike existing work in robotics and multi-agent reinforcement learning, we formulate the problem as one where learned information must be shared across a set of agents in a bandwidth-sensitive manner to optimize for scene understanding tasks such as semantic segmentation. Inspired by networking communication protocols, we propose a multi-stage handshake communication mechanism in which the neural network learns to compress the information relevant to each stage. Specifically, a target agent with degraded sensor data sends a compressed request, the other agents respond with matching scores, and the target agent determines whom to connect with (i.e., which agents to receive information from). We additionally develop the AirSim-CP dataset and metrics based on the AirSim simulator, in which a group of aerial robots perceives diverse landscapes such as roads, grasslands, and buildings. We show that for the semantic segmentation task, our handshake communication method improves accuracy by approximately 20% over decentralized baselines, and is comparable to centralized ones while using a quarter of the bandwidth.
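The three-stage handshake described above can be sketched as plain data flow, ignoring learning. The sketch below is our own illustration, not the paper's architecture: the learned neural compressors are replaced by a fixed random linear projection, and the matching score by an inner product.

```python
import numpy as np

rng = np.random.default_rng(0)

def compress(features, dim):
    # Stand-in for a learned compression module: a random linear
    # projection. In the paper this would be a trained neural encoder.
    proj = rng.standard_normal((features.shape[-1], dim)) / np.sqrt(dim)
    return features @ proj

def handshake(target_feat, neighbor_feats, request_dim=8):
    """Illustrative three-stage handshake (our naming, not the paper's):
    1. the degraded target broadcasts a compressed request,
    2. each neighbor returns a matching score against its own compressed key,
    3. the target connects to the best-scoring neighbor."""
    request = compress(target_feat, request_dim)      # stage 1: request
    scores = []
    for feat in neighbor_feats:                       # stage 2: scoring
        key = compress(feat, request_dim)
        scores.append(float(request @ key))
    best = int(np.argmax(scores))                     # stage 3: selection
    return best, scores
```

Only the low-dimensional request and scalar scores cross the network before the final connection, which is the source of the bandwidth savings the abstract reports.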
Abstract. Computer-aided segmentation of cardiac images obtained by various modalities plays an important role and is a prerequisite for a wide range of cardiac applications by facilitating the delineation of anatomical regions of interest. Numerous computerized methods have been developed to tackle this problem. Recent studies employ sophisticated techniques using available cues from cardiac anatomy such as geometry, visual appearance, and prior knowledge. In addition, new minimization and computational methods have been adopted with improved computational speed and robustness. We provide an overview of cardiac segmentation techniques, with a goal of providing useful advice and references. In addition, we describe important clinical applications, imaging modalities, and validation methods used for cardiac segmentation.