The role of gap junctional intercellular communication (GJIC) in the regulation of normal growth and differentiation is increasingly recognized as a major cellular function. GJIC consists of the intercellular exchange of low-molecular-weight molecules and is the only means of direct contact between the cytoplasms of adjacent animal cells. Disturbances of GJIC have been associated with many pathological conditions, such as carcinogenesis and hereditary illnesses. Reliable and accurate methods for determining GJIC are therefore important in cell biology studies. Several methods are used successfully in numerous laboratories to measure GJIC both in vitro and in vivo. This review comments on techniques currently used to study cell-to-cell communication, either by measuring dye transfer (as in microinjection, scrape loading, gap-fluorescence recovery after photobleaching [gap-FRAP], the preloading assay, and local activation of a molecular fluorescent probe [LAMP]) or by measuring electrical conductance and metabolic cooperation. As we discuss in this review, these techniques are not equivalent but instead provide complementary information. We focus on their main advantages and limitations. Although biological applications guide the choice among the techniques we describe, we also review points that must be taken into consideration before adopting a methodology, such as the number of cells to be analyzed.
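In gap-FRAP, a cell is photobleached and the recovery of its fluorescence, fed by dye transfer from coupled neighbours, is taken as a readout of junctional coupling. The quantity typically reported is the recovered (mobile) fraction computed from the pre-bleach, immediately post-bleach, and plateau intensities. The following is a minimal illustrative sketch of that calculation; the function name and values are hypothetical, not taken from the review:

```python
def frap_recovery_fraction(pre_bleach: float, post_bleach: float, plateau: float) -> float:
    """Recovered (mobile) fraction for a gap-FRAP experiment.

    Computed as (plateau - post_bleach) / (pre_bleach - post_bleach),
    i.e. the share of the bleached signal regained via dye influx
    from neighbouring, gap junction-coupled cells.
    """
    if pre_bleach <= post_bleach:
        raise ValueError("pre-bleach intensity must exceed post-bleach intensity")
    return (plateau - post_bleach) / (pre_bleach - post_bleach)


# Example with arbitrary fluorescence units: bleach from 100 down to 20,
# recovery to a plateau of 70 -> (70 - 20) / (100 - 20) = 0.625
recovery = frap_recovery_fraction(pre_bleach=100.0, post_bleach=20.0, plateau=70.0)
```

A well-coupled cell would show a fraction near 1, while an uncoupled cell (or one treated with a junctional blocker) would stay near 0.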
Cancers located on the internal wall of the bladder can be detected in image sequences acquired with endoscopes. Clinical diagnosis and follow-up can be facilitated by building a single panoramic image of the bladder from images acquired at different viewpoints. This process, called image mosaicing, consists of two steps. In the first step, consecutive images are pairwise registered to find the local transformation matrices that geometrically link them. In the second step, all images are placed in a common, global coordinate system. In this contribution, a mutual information-based similarity measure and a stochastic gradient optimization method were implemented in the registration process. However, the images must be preprocessed in order to register the data robustly. Thus, a simple method for correcting the distortions affecting endoscopic images is presented. After all images are placed in the global coordinate system, the parameters of the local transformation matrices are adjusted to improve the visual quality of the panoramic images. Phantoms are used to evaluate the global mosaicing accuracy and the limits of the registration algorithm. The mean distances between ground-truth positions in the mosaiced image typically range from 1 to 3 pixels. Results on in vivo patient data illustrate the ability of the algorithm to produce coherent panoramic images of the bladder.
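The registration step above scores candidate transformations with a mutual information (MI) similarity measure: MI is high when the intensity distributions of the two images are statistically dependent, i.e. well aligned. The sketch below shows one common way to estimate MI from a joint intensity histogram; it is an illustrative implementation under that assumption, not the authors' actual code, and the bin count is an arbitrary choice:

```python
import numpy as np


def mutual_information(img_a: np.ndarray, img_b: np.ndarray, bins: int = 32) -> float:
    """Estimate mutual information between two equally sized grayscale
    images from their joint intensity histogram."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    p_ab = joint / joint.sum()            # joint probability P(a, b)
    p_a = p_ab.sum(axis=1)                # marginal P(a)
    p_b = p_ab.sum(axis=0)                # marginal P(b)
    nz = p_ab > 0                         # skip empty bins to avoid log(0)
    return float(np.sum(p_ab[nz] * np.log(p_ab[nz] / (p_a[:, None] * p_b[None, :])[nz])))


# A perfectly aligned pair (an image with itself) scores higher than
# two statistically independent images.
rng = np.random.default_rng(0)
img = rng.random((64, 64))
mi_self = mutual_information(img, img)
mi_independent = mutual_information(img, rng.random((64, 64)))
```

An optimizer such as the stochastic gradient method mentioned in the abstract would then search the transformation parameters that maximize this score over the overlap of consecutive frames.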
We present a comprehensive analysis of the submissions to the first edition of the Endoscopy Artefact Detection (EAD) challenge. Using crowd-sourcing, this initiative is a step towards understanding the limitations of existing state-of-the-art computer vision methods applied to endoscopy and promoting the development of new approaches suitable for clinical translation. Endoscopy is a routine imaging technique for the detection, diagnosis, and treatment of diseases in hollow organs: the esophagus, stomach, colon, uterus, and bladder. However, the nature of these organs prevents imaged tissues from being free of imaging artefacts such as bubbles, pixel saturation, organ specularity, and debris, all of which pose substantial challenges for any quantitative analysis. Consequently, the potential for improved clinical outcomes through quantitative assessment of abnormal mucosal surfaces observed in endoscopy videos is presently not fully realized. The EAD challenge promotes awareness of, and addresses, this key bottleneck by investigating methods that can accurately classify, localize, and segment artefacts in endoscopy frames as critical prerequisite tasks. Using a diverse, curated, multi-institutional, multi-modality, multi-organ dataset of video frames, the accuracy and performance of 23 algorithms were objectively ranked for artefact detection and segmentation. The ability of the methods to generalize to unseen datasets was also evaluated. The best performing methods (top 15%) propose deep learning strategies to reconcile variabilities in artefact appearance with respect to size, modality, occurrence, and organ type. However, no single method outperformed the others across all tasks. Detailed analyses reveal the