The first step in the infection of humans by microbial pathogens is adherence to host tissue cells, which is frequently mediated by the binding of carbohydrate-binding proteins (lectin-like adhesins) to glycan-exposing receptors on human cells. In only a few cases have the human receptors of pathogenic adhesins been described. We developed a novel strategy, based on the construction of a lectin-glycan interaction (LGI) network, to identify potential human binding receptors for pathogenic adhesins with lectin activity. The approach links glycan array screening results for these adhesins to a human glycoprotein database through the LGI network. This strategy was used to detect human receptors for the FimH adhesin of virulent Escherichia coli and for adhesins of the candidiasis-causing fungal pathogens Candida albicans (Als1p and Als3p) and C. glabrata (Epa1, Epa6, and Epa7). The LGI network strategy profiles potential adhesin-binding receptors in the host and prioritizes the most relevant interactions on the basis of experimental binding data. New potential targets of the selected adhesins were predicted and experimentally confirmed. The methodology was also used to predict lectin interactions with the envelope glycoproteins of human-pathogenic viruses, and it successfully revealed that the FimH adhesin has anti-HIV activity.
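The core composition step of an LGI network, linking adhesin-to-glycan binding evidence with glycan-to-glycoprotein annotations to rank candidate host receptors, can be sketched in a few lines. All data below are illustrative placeholders (the protein and glycan names are assumptions, not results from the study):

```python
# Hypothetical sketch of the LGI-network composition step: glycan-array
# binding intensities for an adhesin are linked, via a glycan-to-glycoprotein
# mapping, to candidate host receptors ranked by accumulated binding evidence.
from collections import defaultdict

# Adhesin -> glycan binding intensities (as from a glycan array screen)
adhesin_glycan = {
    "FimH": {"Man(a1-3)Man": 0.9, "Man(a1-6)Man": 0.7},
}

# Glycan -> human glycoproteins known to carry it (as from a database)
glycan_glycoprotein = {
    "Man(a1-3)Man": ["CD45", "uroplakin-Ia"],
    "Man(a1-6)Man": ["uroplakin-Ia"],
}

def rank_receptors(adhesin):
    """Aggregate binding evidence per candidate receptor and sort."""
    scores = defaultdict(float)
    for glycan, intensity in adhesin_glycan[adhesin].items():
        for protein in glycan_glycoprotein.get(glycan, []):
            scores[protein] += intensity
    return sorted(scores.items(), key=lambda kv: -kv[1])

ranking = rank_receptors("FimH")
print(ranking)
```

Receptors carrying several strongly bound glycans accumulate more evidence and rise to the top of the list, which is the prioritization idea the abstract describes.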
Living single yeast cells show a specific cellular motion at the nanometer scale, with a magnitude proportional to the cell's activity. We characterized this nanomotion pattern of non-attached single yeast cells using classical optical microscopy. The distribution of cellular displacements over short time periods is distinct from random motion, and the range and shape of the displacement distributions change substantially with the metabolic state of the cell. Analysis of the nanomotion frequency pattern demonstrated that single living yeast cells oscillate at relatively low frequencies, around 2 hertz. The simplicity of the technique should open the way to numerous applications, among which antifungal susceptibility testing seems the most straightforward.
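The two quantities the abstract relies on, the per-frame displacement distribution and the dominant oscillation frequency, can be computed from a tracked cell-centre trace. The following is an illustrative sketch (not the authors' pipeline), run on a synthetic 2 Hz trace:

```python
# Illustrative sketch: given a tracked cell-centre position over time,
# compute the frame-to-frame displacement distribution and the dominant
# oscillation frequency from an FFT of the position signal.
import numpy as np

def nanomotion_features(x, y, fps):
    """x, y: position traces in nm; fps: frames per second."""
    disp = np.hypot(np.diff(x), np.diff(y))        # per-frame displacements
    spectrum = np.abs(np.fft.rfft(x - x.mean()))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    dominant = freqs[1:][np.argmax(spectrum[1:])]  # skip the DC component
    return disp.mean(), disp.std(), dominant

# Synthetic trace: a 2 Hz oscillation plus noise, sampled at 100 fps
t = np.arange(0, 10, 0.01)
rng = np.random.default_rng(0)
x = 50 * np.sin(2 * np.pi * 2.0 * t) + rng.normal(0, 5, t.size)
y = rng.normal(0, 5, t.size)
mean_d, std_d, f = nanomotion_features(x, y, fps=100)
print(round(f, 2))  # ~2.0 Hz
```

A metabolically active cell would shift the displacement distribution (`mean_d`, `std_d`) to larger values, which is the signal a susceptibility test would monitor.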
The cytoskeleton is a highly dynamic protein network that plays a central role in numerous physiological processes of the cell, and it is traditionally divided into three components according to chemical composition: the actin, tubulin, and intermediate-filament cytoskeletons. Understanding cytoskeleton dynamics is of prime importance for unveiling the mechanisms involved in cell adaptation to any type of stress. Fluorescence imaging of cytoskeletal structures allows the impact of mechanical stimulation on the cytoskeleton to be analyzed, but it also imposes additional challenges at the image-processing stage, such as imaging-related artifacts and the heavy blurring introduced by (high-throughput) automated scans. Although a considerable number of image-based analytical tools exist for image processing and analysis, most are unfit to cope with these challenges. Filamentous structures in images can be considered a piecewise composition of quasi-straight segments (at least at some finer or coarser scale). Based on this observation, we propose a three-step actin-filament extraction methodology: (i) the input image is decomposed into a ‘cartoon’ part, corresponding to the filament structures, and a noise/texture part; (ii) a multi-scale line detector is applied to the ‘cartoon’ image; and (iii) a quasi-straight filament-merging algorithm extracts the fibers. The proposed framework robustly extracts individual filaments in the presence of noise, artifacts, and heavy blurring. Moreover, it provides numerous parameters, such as filament orientation, position, and length, that are useful for further analysis. Cell-image decomposition is relatively under-exploited in biological image processing, and our study shows the benefits it provides for such tasks.
The methodology was validated on publicly available datasets and on osteoblasts grown under two conditions: static (control) and fluid shear stress. It exhibited higher sensitivity and similar accuracy compared with state-of-the-art methods.
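The three-step structure above can be sketched with simplified stand-ins: Gaussian smoothing as a crude cartoon/texture split, an oriented-line filter bank as the multi-scale detector, and thresholding in place of the merging step. This is not the authors' code, only a minimal illustration of the pipeline shape:

```python
# Simplified stand-in for the three-step pipeline (not the published method):
# (i) crude cartoon/texture split via Gaussian smoothing, (ii) multi-scale
# oriented-line detector (mean along short line segments minus local mean),
# (iii) merging replaced by a simple threshold on the best response.
import numpy as np
from scipy import ndimage

def line_kernel(length, angle):
    """Normalized kernel of a straight line segment at the given angle."""
    k = np.zeros((length, length))
    c = (length - 1) / 2
    for t in np.linspace(-c, c, 2 * length):
        r = int(round(c + t * np.sin(angle)))
        q = int(round(c + t * np.cos(angle)))
        k[r, q] = 1
    return k / k.sum()

def detect_filaments(img, lengths=(7, 11), n_angles=8):
    # (i) cartoon part = smoothed image; the residual is the texture part
    cartoon = ndimage.gaussian_filter(img, sigma=1.0)
    # (ii) multi-scale line detector: best oriented mean minus local mean
    best = np.full(img.shape, -np.inf)
    for length in lengths:
        local = ndimage.uniform_filter(cartoon, size=length)
        for a in np.linspace(0, np.pi, n_angles, endpoint=False):
            resp = ndimage.convolve(cartoon, line_kernel(length, a)) - local
            best = np.maximum(best, resp)
    # (iii) merging step omitted; threshold the response as a stand-in
    return best > best.mean() + 2 * best.std()

# Synthetic test image with one bright horizontal filament
img = np.zeros((40, 40))
img[20, 5:35] = 1.0
mask = detect_filaments(img)
print(mask[20, 10:30].all())
```

Elongated structures respond strongly to at least one orientation at some scale, while isotropic noise does not, which is why the line-detector response separates filaments from background.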
In this paper we propose a novel framework for processing Doppler-radar signals for hand gesture recognition. Doppler-radar sensors provide many advantages over other emerging sensing modalities, including low development cost and high sensitivity, allowing subtle gestures to be captured with precision. Furthermore, they have attractive properties for ubiquitous deployment and can be conveniently embedded into different devices. Current recognition methods, however, still rely on deep CNN-LSTM and 3D CNN-LSTM architectures that require sufficient labelled data to optimize millions of parameters, as well as significant computational resources for inference, which limits their deployment. Moreover, subtle gesture recognition is a challenging task owing to the high variability of gestures among subjects. To overcome these challenges and the limitations of current methods, we propose a shallow learning approach for gesture recognition based on unsupervised range-Doppler feature representation together with a learnable pooling aggregation via NetVLAD. The proposed framework encodes highly valuable information across time and yields features that are highly discriminative for hand gesture recognition. Experiments on publicly available Doppler-radar data show that the proposed framework outperforms state-of-the-art approaches in recognition accuracy and speed for sequence-level hand gesture classification.

INDEX TERMS Convolutional neural networks, Doppler-radar, feature aggregation, hand gesture recognition, unsupervised representation learning.
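NetVLAD-style pooling, the aggregation step named above, turns a variable-length sequence of per-frame descriptors into one fixed-size embedding. A minimal NumPy sketch with untrained, randomly initialized parameters (the shapes and names are assumptions for illustration):

```python
# Minimal NetVLAD-style pooling sketch: per-frame range-Doppler descriptors
# are softly assigned to K cluster centres, and their residuals from each
# centre are aggregated into one fixed-size sequence-level vector.
import numpy as np

def netvlad_pool(X, centres, assign_W, assign_b):
    """X: (T, D) frame descriptors -> (K*D,) sequence-level embedding."""
    logits = X @ assign_W + assign_b               # (T, K) assignment scores
    logits -= logits.max(axis=1, keepdims=True)
    A = np.exp(logits)
    A /= A.sum(axis=1, keepdims=True)              # softmax over clusters
    # Assignment-weighted residuals of each descriptor from each centre
    V = np.einsum('tk,tkd->kd', A, X[:, None, :] - centres[None])
    V /= np.linalg.norm(V, axis=1, keepdims=True) + 1e-12  # intra-normalize
    v = V.ravel()
    return v / (np.linalg.norm(v) + 1e-12)         # final L2 normalization

rng = np.random.default_rng(0)
T, D, K = 60, 16, 4                                # frames, descriptor dim, clusters
X = rng.normal(size=(T, D))                        # stand-in frame descriptors
centres = rng.normal(size=(K, D))
W, b = rng.normal(size=(D, K)), np.zeros(K)
emb = netvlad_pool(X, centres, W, b)
print(emb.shape)  # (64,)
```

Because the output size is K*D regardless of the number of frames T, sequences of any length map to embeddings a shallow classifier can consume directly.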