Abstract-We propose a novel framework for a new class of two-channel biorthogonal filter banks. The framework covers two useful subclasses: i) causal stable IIR filter banks; ii) linear phase FIR filter banks. A very efficient structurally perfect reconstruction implementation exists for this class. Filter banks of high frequency selectivity can be achieved with low complexity using the proposed framework. The properties of the class are discussed in detail. The design of the analysis/synthesis systems reduces to the design of a single transfer function. Very simple design methods are given for both the FIR and IIR cases. Zeros of arbitrary multiplicity at the aliasing frequency can easily be imposed in order to generate wavelets with the regularity property. In the IIR case, two new classes of IIR maximally flat filters, distinct from Butterworth filters, are introduced. The filter coefficients are given in closed form. The wavelet bases corresponding to the biorthogonal systems are generated. We also provide a novel mapping of the proposed 1-D framework into 2-D. The mapping preserves: i) perfect reconstruction; ii) stability in the IIR case; iii) linear phase in the FIR case; iv) zeros at the aliasing frequency; v) the frequency characteristics of the filters.
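To make the idea of a structurally perfect reconstruction implementation concrete, the following is a minimal sketch using the standard LeGall 5/3 biorthogonal wavelet in lifting form. This is an illustration of the general principle only, not the framework proposed in the abstract: in a lifting structure, the synthesis bank simply undoes each lifting step in reverse order, so perfect reconstruction holds by construction regardless of how the step coefficients are chosen. Periodic boundary extension (via `np.roll`) and even-length input are assumed for simplicity.

```python
import numpy as np

def analysis_53(x):
    """One level of the LeGall 5/3 biorthogonal wavelet via lifting.

    Returns (s, d): low-pass (approximation) and high-pass (detail)
    subband signals, each half the length of the even-length input.
    """
    even, odd = x[0::2].astype(float), x[1::2].astype(float)
    # Predict step: detail = odd samples minus a prediction from even ones.
    d = odd - 0.5 * (even + np.roll(even, -1))
    # Update step: smooth the even samples using the details.
    s = even + 0.25 * (d + np.roll(d, 1))
    return s, d

def synthesis_53(s, d):
    """Invert the lifting steps in reverse order: reconstruction is
    exact by structure, not by numerical cancellation."""
    even = s - 0.25 * (d + np.roll(d, 1))
    odd = d + 0.5 * (even + np.roll(even, -1))
    x = np.empty(even.size + odd.size)
    x[0::2], x[1::2] = even, odd
    return x
```

Because each synthesis line is the algebraic inverse of the corresponding analysis line, the reconstruction error is zero to machine precision even if the step coefficients were quantized, which is what "structurally perfect reconstruction" means here.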
Purpose-To report an image segmentation algorithm that was developed to provide quantitative thickness measurement of 6 retinal layers in optical coherence tomography (OCT) images. Design-Prospective cross-sectional study. Methods-Imaging was performed with time and spectral domain OCT instruments in 15 and 10 normal healthy subjects, respectively. A dedicated software algorithm was developed for boundary detection based on a 2-D edge detection scheme, enhancing edges along the retinal depth while suppressing speckle noise. Automated boundary detection and quantitative thickness measurements derived by the algorithm were compared with measurements obtained from boundaries manually marked by 3 observers. Thickness profiles for 6 retinal layers were generated in normal subjects. Results-The algorithm identified 7 boundaries and measured thickness of 6 retinal layers: nerve fiber layer (NFL), inner plexiform layer and ganglion cell layer (IPL+GCL), inner nuclear layer (INL), outer plexiform layer (OPL), outer nuclear layer and photoreceptor inner segments (ONL+PIS), and photoreceptor outer segments (POS). The root mean squared error (RMSE) between the manual and automatic boundary detection ranged between 4 and 9 microns. The mean absolute values of differences between automated and manual thickness measurements were between 3 and 4 microns, and comparable to inter-observer differences. Inner retinal thickness profiles demonstrated minimum thickness at the fovea, corresponding to normal anatomy. The OPL and ONL+PIS thickness profiles displayed a minimum and maximum thickness at the fovea, respectively. The POS thickness profile was relatively constant along the scan through the fovea. Conclusions-The application of this image segmentation technique is promising for investigating thickness changes of retinal layers due to disease progression and therapeutic intervention.
Gesture and speech combine to form a rich basis for human conversational interaction. To exploit these modalities in HCI, we need to understand the interplay between them and the way in which they support communication. We propose a framework for the gesture research done to date, and present our work on the cross-modal cues for discourse segmentation in free-form gesticulation accompanying speech in natural conversation as a new paradigm for such multimodal interaction. The basis for this integration is the psycholinguistic concept of the coequal generation of gesture and speech from the same semantic intent. We present a detailed case study of a gesture and speech elicitation experiment in which a subject describes her living space to an interlocutor. We perform two independent sets of analyses on the video and audio data: video and audio analysis to extract segmentation cues, and expert transcription of the speech and gesture data by microanalyzing the videotape using a frame-accurate videoplayer to correlate the speech with the gestural entities. We compare the results of both analyses to identify the cues accessible in the gestural and audio data that correlate well with the expert psycholinguistic analysis. We show that "handedness" and the kind of symmetry in two-handed gestures provide effective supersegmental discourse cues.