Animal cells use an unknown mechanism to control their growth and physical size. Here, using the fluorescence exclusion method, we measure cell volume for adherent cells on substrates of varying stiffness. We discover that the cell volume has a complex dependence on substrate stiffness and is positively correlated with the size of the cell adhesion to the substrate. From a mechanical force-balance condition that determines the geometry of the cell surface, we find that the observed cell volume variation can be predicted quantitatively from the distribution of active myosin through the cell cortex. To connect cell mechanical tension with cell size homeostasis, we quantify the nuclear localization of YAP/TAZ, a transcription factor involved in cell growth and proliferation. We find that the level of nuclear YAP/TAZ is positively correlated with the average cell volume. Moreover, the level of nuclear YAP/TAZ is also connected to cell tension, as measured by the amount of phosphorylated myosin. Cells with greater apical tension tend to have higher levels of nuclear YAP/TAZ and a larger cell volume. These results point to a size-sensing mechanism based on mechanical tension: the cell tension increases as the cell grows, and increasing tension feeds back biochemically to growth and proliferation control.
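The force-balance condition referenced above is commonly expressed, for a pressurized cell bounded by a tensed cortex, as a Laplace-law relation between tension and curvature. This is a generic sketch of that class of condition, not the paper's exact formulation:

```latex
% Generic Laplace-law force balance at the cell surface:
% the hydrostatic pressure difference \Delta P across the cortex
% is balanced by the cortical tension \gamma times twice the
% local mean curvature H of the surface.
\Delta P = 2 \gamma H
```

Under such a condition, a spatially varying cortical tension (e.g. set by the local density of active myosin) fixes the surface geometry and hence the enclosed volume, which is how a myosin distribution can predict cell volume.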
Inspired by Curriculum Learning, we propose a consecutive (i.e., image-to-text-to-text) generation framework that divides the problem of radiology report generation into two steps. Rather than generating the full radiology report from the image at once, the model first generates global concepts from the image and then reforms them into finer, coherent text using a transformer-based architecture. We follow the transformer-based sequence-to-sequence paradigm at each step. We improve upon the state of the art on two benchmark datasets.
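The two-step pipeline described above can be sketched as the composition of two generation stages. This is a minimal, hypothetical illustration with stubbed model internals; all function names, concept labels, and the thresholding logic are illustrative and not taken from the paper:

```python
# Hypothetical sketch of the consecutive image-to-text-to-text pipeline.
# Both stages are stand-ins for transformer sequence-to-sequence models.

def extract_global_concepts(image_features):
    """Step 1 (image-to-text): map visual features to coarse concept
    tokens. Stub: keep concept labels whose feature score exceeds 0.5."""
    vocabulary = ["cardiomegaly", "pleural effusion", "consolidation"]
    return [c for c, score in zip(vocabulary, image_features) if score > 0.5]

def concepts_to_report(concepts):
    """Step 2 (text-to-text): reform concept tokens into finer,
    coherent report sentences."""
    if not concepts:
        return "No acute findings."
    return " ".join(f"Findings consistent with {c}." for c in concepts)

def generate_report(image_features):
    """Full consecutive pipeline: image -> global concepts -> report."""
    return concepts_to_report(extract_global_concepts(image_features))
```

The design point is that the intermediate concept sequence is an easier target than the full report, so each stage learns a simpler mapping.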
Objectives The first objective of this study was to implement and assess the performance and reliability of a vision transformer (ViT)-based deep-learning model, an ‘off-the-shelf’ artificial intelligence solution, for identifying distinct signs of microangiopathy in nailfold capillaroscopy (NFC) images of patients with systemic sclerosis (SSc). The second objective was to compare the ViT’s analysis performance with that of practising rheumatologists. Methods NFC images of patients prospectively enrolled in our European Scleroderma Trials and Research group (EUSTAR) and Very Early Diagnosis of Systemic Sclerosis (VEDOSS) local registries were used. The primary outcome investigated was the ViT’s classification performance for identifying disease-associated changes (enlarged capillaries, giant capillaries, capillary loss, microhaemorrhages) and the presence of the scleroderma pattern in these images using a cross-fold validation setting. The secondary outcome involved a comparison of the ViT’s performance vs that of rheumatologists on a reliability set, consisting of a subset of 464 NFC images with majority vote–derived ground-truth labels. Results We analysed 17 126 NFC images derived from 234 EUSTAR and 55 VEDOSS patients. The ViT had good performance in identifying the various microangiopathic changes in capillaries by NFC [area under the curve (AUC) from 81.8% to 84.5%]. In the reliability set, the rheumatologists reached a higher average accuracy, as well as a better trade-off between sensitivity and specificity, compared with the ViT. However, the annotators’ performance was variable, and one out of four rheumatologists showed equal or lower classification measures compared with the ViT.
Conclusions The ViT is a modern, well-performing and readily available tool for assessing patterns of microangiopathy on NFC images, and it may assist rheumatologists in generating consistent and high-quality NFC reports; however, the final diagnosis of a scleroderma pattern in any individual case needs the judgement of an experienced observer.
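The cross-fold validation setting mentioned above is, for image datasets drawn from multiple patients, typically performed at the patient level so that images from one patient never appear in both training and validation. Whether this study split at patient level is an assumption; the sketch below only illustrates the general technique, with all names invented for illustration:

```python
# Hypothetical sketch of a patient-level cross-fold split for an
# image dataset (the study's exact protocol may differ).

def patient_level_folds(patient_ids, k=5):
    """Partition the unique patients into k disjoint folds, so that
    all images of a given patient fall into exactly one fold."""
    unique = sorted(set(patient_ids))
    return [unique[i::k] for i in range(k)]

def fold_images(image_patient_ids, fold):
    """Indices of images belonging to patients in the given fold."""
    members = set(fold)
    return [i for i, pid in enumerate(image_patient_ids) if pid in members]
```

Splitting by patient rather than by image avoids leakage: nearby capillary images from the same nailfold are highly correlated, and image-level splits would inflate validation metrics such as AUC.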