Introduction: Our objective in the present study was to determine the signaling pathway by which interleukin 10 (IL-10) modulates IL-17 expression in macrophages and the importance of this mediation in collagen-induced arthritis (CIA). Methods: IL-10-knockout (IL-10−/−) and wild-type (WT) mice were immunized with chicken type II collagen (CII) to induce arthritis. The expression levels of IL-17 and retinoid-related orphan receptor γt (RORγt) in macrophages and joint tissues of IL-10−/− and WT mice were analyzed by enzyme-linked immunosorbent assay, quantitative RT-PCR (qRT-PCR), and Western blotting. F4/80-positive macrophages and IL-17-producing macrophages in synovial tissues were identified by immunohistochemistry. The populations of classically activated (M1) and alternatively activated (M2) macrophage phenotypes were analyzed by flow cytometry, and the expression of genes associated with M1 and M2 markers was analyzed by qRT-PCR. Results: Compared with WT mice, IL-10−/− mice showed exacerbated CIA development, which was associated with increased production of T helper 17 cell (Th17)/Th1 proinflammatory cytokines and CII-specific immunoglobulin G2a antibody after CII immunization. Macrophages from IL-10−/− mice with CIA expressed higher levels of IL-17 and RORγt than those from WT mice. Immunofluorescence microscopy showed that the number of IL-17-producing macrophages in synovial tissues was significantly higher in IL-10−/− mice than in WT mice. IL-10 deficiency may promote macrophage polarization toward the proinflammatory M1 phenotype, which contributes to the inflammatory response in rheumatoid arthritis. Conclusion: IL-10 inhibits IL-17 and RORγt expression in macrophages and suppresses macrophage polarization toward the proinflammatory M1 phenotype, which is important for the role of IL-10 in mediating the pathogenesis of CIA.
Vision Transformers (ViTs) take all image patches as tokens and construct multi-head self-attention (MHSA) among them. Using all of these tokens introduces redundant computation, since not all tokens are attentive in MHSA; for example, tokens covering semantically meaningless or distracting image backgrounds do not contribute positively to ViT predictions. In this work, we propose to reorganize image tokens during the feed-forward process of ViT models, and this reorganization is integrated into ViT training. For each forward inference, we identify the attentive image tokens between the MHSA and FFN (i.e., feed-forward network) modules, guided by the corresponding class token attention. We then reorganize the image tokens by preserving the attentive ones and fusing the inattentive ones to expedite subsequent MHSA and FFN computations. In this way, our method, EViT, improves ViTs from two perspectives. First, for the same number of input image tokens, it reduces MHSA and FFN computation for efficient inference; for instance, the inference speed of DeiT-S is increased by 50% while its recognition accuracy is decreased by only 0.3% for ImageNet classification. Second, at the same computational cost, it empowers ViTs to take more image tokens as input to improve recognition accuracy, where the additional tokens come from higher-resolution images; for example, we improve the ImageNet recognition accuracy of DeiT-S by 1% at the same computational cost as a vanilla DeiT-S. Meanwhile, our method introduces no additional parameters to ViTs. Experiments on standard benchmarks show the effectiveness of our method. The code is available at https://github.com/youweiliang/evit
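The token-reorganization step the abstract describes can be illustrated with a minimal NumPy sketch: given the class token's attention over image tokens, keep the most attentive tokens and fuse the rest into a single token weighted by their attention scores. The function name, the `keep_ratio` value, and the attention-weighted-average fusion rule are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def reorganize_tokens(tokens, cls_attn, keep_ratio=0.7):
    """Keep the most attentive image tokens; fuse the rest into one token.

    tokens:     (N, D) image token embeddings (class token excluded).
    cls_attn:   (N,) attention weights of the class token over image tokens.
    keep_ratio: fraction of tokens preserved (illustrative default).
    """
    n_keep = max(1, int(round(tokens.shape[0] * keep_ratio)))
    order = np.argsort(cls_attn)[::-1]      # most attentive first
    keep_idx = np.sort(order[:n_keep])      # preserve original spatial order
    fuse_idx = order[n_keep:]
    kept = tokens[keep_idx]
    if fuse_idx.size == 0:
        return kept
    # Fuse inattentive tokens into one, weighted by their attention scores.
    w = cls_attn[fuse_idx]
    fused = (w[:, None] * tokens[fuse_idx]).sum(axis=0) / w.sum()
    return np.vstack([kept, fused[None, :]])
```

Because the fused token replaces many inattentive ones, subsequent MHSA and FFN layers operate on a shorter sequence, which is where the inference speedup comes from.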
In this paper, a novel Multiview CLOUD (mCLOUD) visual feature extraction mechanism is proposed for the task of categorizing clouds based on ground-based images. To completely characterize the different types of clouds, mCLOUD first extracts raw visual descriptors from the views of texture, structure, and color simultaneously, in a densely sampled way; specifically, the scale-invariant feature transform (SIFT), the census transform histogram (CENTRIST), and statistical color features are extracted, respectively. To obtain a more descriptive cloud representation, the raw descriptors are encoded using the Fisher vector, followed by a feature aggregation procedure. A linear support vector machine (SVM) is employed as the classifier to yield the final cloud image categorization result. Experiments on a challenging cloud dataset, the six-class Huazhong University of Science and Technology (HUST) cloud dataset, demonstrate that mCLOUD consistently outperforms state-of-the-art cloud classification approaches by large margins (at least 6.9%) under all experimental settings. It has also been verified that, compared to any single view, the multiview cloud representation generally enhances performance.
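The Fisher vector encoding at the core of this pipeline can be sketched in NumPy. This is a simplified version, assuming a diagonal-covariance GMM and computing only the gradient with respect to the means (full Fisher vectors also include weight and variance gradients plus power and L2 normalization); the function name and parameters are illustrative.

```python
import numpy as np

def fisher_vector_means(X, pi, mu, sigma):
    """First-order Fisher vector of local descriptors under a diagonal GMM.

    X:     (M, D) local descriptors from one view (e.g. SIFT patches).
    pi:    (K,) GMM mixture weights.
    mu:    (K, D) GMM means.
    sigma: (K, D) GMM standard deviations (diagonal covariance).
    Returns a (K*D,) encoding: the normalized gradient w.r.t. the means.
    """
    M, D = X.shape
    diff = X[:, None, :] - mu[None, :, :]                     # (M, K, D)
    # Log-density of each descriptor under each Gaussian component.
    log_p = (np.log(pi)[None, :]
             - 0.5 * np.sum(np.log(2 * np.pi * sigma**2), axis=1)[None, :]
             - 0.5 * np.sum((diff / sigma[None, :, :])**2, axis=2))
    # Soft assignments (posterior responsibilities), computed stably.
    gamma = np.exp(log_p - log_p.max(axis=1, keepdims=True))
    gamma /= gamma.sum(axis=1, keepdims=True)                 # (M, K)
    # Gradient w.r.t. the means, normalized per component.
    G = (gamma[:, :, None] * (diff / sigma[None, :, :])).sum(axis=0)  # (K, D)
    G /= M * np.sqrt(pi)[:, None]
    return G.ravel()
```

In a multiview setup like mCLOUD's, one such encoding would be computed per view (SIFT, CENTRIST, color) and the results aggregated before training the linear SVM.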
Echocardiographic evidence of prepacing systolic dyssynchrony measured by tissue Doppler imaging (TDI) velocity, but not TDI strain, predicted fewer long-term cardiovascular events after cardiac resynchronization therapy (CRT).
School bullying is a common social problem that affects children both mentally and physically, making its prevention a persistent concern worldwide. This paper proposes a method for detecting bullying in schools based on activity recognition and speech emotion recognition. In this method, motion and voice data are gathered by movement sensors and a microphone, and a set of motion and audio features is extracted to distinguish bullying incidents from daily-life events. The motion features include both time-domain and frequency-domain features, while the audio features are classical mel-frequency cepstral coefficients (MFCCs). Feature selection is implemented using the wrapper approach. The motion and audio features are then merged into combined feature vectors for classification, and linear discriminant analysis (LDA) is used for further dimensionality reduction. A back-propagation neural network (BPNN) is trained to recognize bullying activities and distinguish them from normal daily-life activities. The authors also propose an action transition detection method to reduce computational complexity in practical use: the bullying detection algorithm runs only when an action transition event has been detected. Simulation results show that the combined motion-audio feature vector outperforms separate motion and acoustic features, achieving an accuracy of 82.4% and a precision of 92.2%. Moreover, with the action transition method, the computation cost can be reduced by half.
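The action-transition gate described above can be sketched as a cheap pre-filter: run the expensive classifier only when the motion signal's statistics jump between consecutive windows. This is a minimal illustration, assuming a variance-based transition criterion; the function name, window size, and threshold are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def detect_transitions(accel_mag, win=50, thresh=1.5):
    """Return start indices of windows where an action transition occurs.

    accel_mag: 1-D acceleration-magnitude signal from the movement sensor.
    A transition is flagged when a window's variance exceeds `thresh`
    times the previous window's variance; the downstream bullying
    classifier would only run on flagged windows.
    """
    flags = []
    prev_var = None
    for start in range(0, len(accel_mag) - win + 1, win):
        v = float(np.var(accel_mag[start:start + win]))
        if prev_var is not None and v > thresh * max(prev_var, 1e-8):
            flags.append(start)
        prev_var = v
    return flags
```

Since the gate is a single variance comparison per window, it costs far less than running feature extraction, LDA projection, and the BPNN on every window, which is how the reported halving of computation cost becomes plausible.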