Zhongyong thinking is a common approach adopted by Chinese people to solve problems encountered in life and work. Based on the four modes of zhongyong thinking proposed by Pang (Social Sciences in China, 1, 1980, 75), this study selects the “neither A nor B” form, which represents the “mean” (中) characteristic of zhongyong thinking, termed eclectic thinking, and the “both A and B” form, which reflects the “harmony” (和) characteristic, termed integrated thinking. Eclectic thinking and integrated thinking were primed, respectively, through self-compiled problem situations, with 150 college students and postgraduate students as participants. Experiment 1 explored the role of zhongyong-thinking priming in three classic creative-thinking tasks: a divergent thinking test, a remote associates test, and an insight problem-solving test. Experiment 2 further examined the effect of this priming on “market investment problems” with higher ecological validity. The findings show that priming integrated thinking improves remote associates test performance and promotes creative solutions to market investment problems, but has no significant impact on divergent thinking test scores or insight problem solving; priming eclectic thinking has no significant impact on any of the subsequent creative tasks. These results suggest that integrated thinking primes cognitive processing related to information association and information integration, thereby promoting subsequent creative tasks.
Scene text recognition (STR) is a challenging task that requires large-scale annotated data for training. However, collecting and labeling real text images is expensive and time-consuming, which limits the availability of real data. Therefore, most existing STR methods resort to synthetic data, which may introduce domain discrepancy and degrade the performance of STR models. To alleviate this problem, recent semi-supervised STR methods exploit unlabeled real data by enforcing character-level consistency regularization between weakly and strongly augmented views of the same image. However, these methods neglect word-level consistency, which is crucial for sequence recognition tasks. This paper proposes a novel semi-supervised learning method for STR that incorporates word-level consistency regularization from both visual and semantic aspects. Specifically, we devise a shortest path alignment module to align the sequential visual features of different views and minimize their distance. Moreover, we adopt a reinforcement learning framework to optimize the semantic similarity of the predicted strings in the embedding space. We conduct extensive experiments on several standard and challenging STR benchmarks and demonstrate the superiority of our proposed method over existing semi-supervised STR methods.
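The abstract does not detail the shortest path alignment module, but aligning two variable-length sequential feature views by a minimal-cost path is the core idea of dynamic-programming alignment (DTW-style). The following is a rough, generic sketch of that idea; the function name, interface, and Euclidean frame distance are assumptions for illustration, not the paper's implementation:

```python
import numpy as np

def shortest_path_alignment(a, b):
    """Align two feature sequences a (len_a x d) and b (len_b x d) by a
    shortest-path dynamic program over pairwise frame distances, and
    return the minimal cumulative alignment cost. A cost of 0 means the
    two views trace out identical feature trajectories."""
    la, lb = len(a), len(b)
    # Pairwise Euclidean distances between frames of the two views.
    dist = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    cost = np.full((la + 1, lb + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, la + 1):
        for j in range(1, lb + 1):
            cost[i, j] = dist[i - 1, j - 1] + min(
                cost[i - 1, j],      # consume a frame of a only
                cost[i, j - 1],      # consume a frame of b only
                cost[i - 1, j - 1],  # match the two frames
            )
    return float(cost[la, lb])
```

Minimizing this cost as a training loss would pull the sequential features of the weakly and strongly augmented views together without requiring them to have the same length, which is what makes a path-based alignment attractive for sequence recognition.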
This paper presents a variational approach for bias correction and boundary delineation in Synthetic Aperture Radar (SAR) images with intensity inhomogeneity. Bias fields in SAR images can have a negative impact on boundary delineation. Our approach proceeds in two steps within a unified energy-minimization framework. First, we propose a deviation correction method that requires no physical parameters. Then, we apply an improved geodesic active contour model that incorporates tensor voting to delineate boundaries. The advantage of combining the geodesic active contour model with tensor voting is that the active contour becomes more sensitive to weak boundaries.
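The deviation correction step is described only at a high level above. As a generic illustration of parameter-free multiplicative bias removal (not the paper's method), a common homomorphic scheme is: take the log of the image, estimate the bias as a heavily smoothed version of the log-intensity, and subtract it. The sketch below uses a separable box filter and assumed function names:

```python
import numpy as np

def estimate_bias(log_img, k=15):
    """Estimate a smooth additive bias in log-space by box-filtering the
    log-intensity image with an odd window size k (edge-replicated)."""
    pad = k // 2
    padded = np.pad(log_img, pad, mode="edge")
    kernel = np.ones(k) / k
    # Separable box filter: smooth rows, then columns.
    sm = np.apply_along_axis(lambda r: np.convolve(r, kernel, "valid"), 1, padded)
    sm = np.apply_along_axis(lambda c: np.convolve(c, kernel, "valid"), 0, sm)
    return sm

def correct_bias(img, k=15):
    """Remove a smooth multiplicative bias field: log, estimate the
    smooth component, subtract it, and restore the global mean level."""
    log_img = np.log(img + 1e-6)
    bias = estimate_bias(log_img, k)
    return np.exp(log_img - bias + bias.mean())
```

The only tunable quantity is the smoothing window, which is a scale choice rather than a physical sensor parameter; this is in the same spirit as the "no physical parameter" claim, though the paper's variational formulation is certainly different.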
Existing text recognition methods usually need large-scale training data. Most of them rely on synthetic training data due to the lack of annotated real images. However, there is a domain gap between synthetic and real data, which limits the performance of text recognition models. Recent self-supervised text recognition methods attempt to utilize unlabeled real images by introducing contrastive learning, which mainly learns the discrimination of text images. Inspired by the observation that humans learn to recognize text through both reading and writing, we propose to learn discrimination and generation by integrating contrastive learning and masked image modeling in our self-supervised method. The contrastive learning branch is adopted to learn the discrimination of text images, which imitates the reading behavior of humans. Meanwhile, masked image modeling is introduced to text recognition for the first time to learn the context generation of text images, which is similar to the writing behavior. The experimental results show that our method outperforms previous self-supervised text recognition methods by 10.2%-20.2% on irregular scene text recognition datasets. Moreover, our proposed text recognizer exceeds previous state-of-the-art text recognition methods by an average of 5.3% on 11 benchmarks, with a similar model size. We also demonstrate that our pre-trained model can be easily applied to other text-related tasks with notable performance gains.
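The masked image modeling objective mentioned above can be summarized independently of any particular architecture: hide a random subset of image patches and score the model only on how well it regenerates the hidden ones from the visible context. A minimal numpy sketch of that objective follows; the function names and the 50% mask ratio are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def random_mask(n_patches, ratio=0.5, seed=0):
    """Choose a random subset of patch indices to hide (mask=True)."""
    rng = np.random.default_rng(seed)
    mask = np.zeros(n_patches, dtype=bool)
    hidden = rng.choice(n_patches, size=int(n_patches * ratio), replace=False)
    mask[hidden] = True
    return mask

def masked_patch_loss(patches, reconstruction, mask):
    """Mean-squared reconstruction error computed only on the hidden
    patches: the model is rewarded for regenerating content it could
    not see, which forces it to exploit the surrounding visual context
    (the 'writing'-like generative objective)."""
    diff = patches[mask] - reconstruction[mask]
    return float(np.mean(diff ** 2))
```

Pairing this generative loss with a contrastive loss on the same encoder is what gives the reading-plus-writing combination the abstract describes: one term shapes instance discrimination, the other shapes context generation.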