Purpose: Everyday social communication relies heavily on speech comprehension. To date, most neurobiological models of auditory semantic processing are based on alphabetic languages, while character-based languages such as Chinese remain largely underrepresented. The current study therefore investigated the neural network underlying speech comprehension specifically for the Chinese language. Methods: Twenty-two native Mandarin Chinese speakers were imaged while passively listening to forward and backward sentences. Sentences rather than single words were used as task stimuli because sentences are more representative of everyday speech comprehension. Results: Our results suggest that spoken Chinese sentence comprehension involves a neural network comprising the left middle temporal gyrus, the left anterior temporal lobe, and the bilateral posterior superior temporal lobes. The occipitotemporal visual cortex was not significantly involved in the sentence-level network of spoken Chinese comprehension, possibly because the bottom-up mapping from homophones to visual word forms is less needed when top-down contextual control is available during sentence processing. In addition, no significant functional connectivity was observed, likely because it was obscured by the low cognitive demand of the task conditions. Limitations and future directions are discussed. Conclusion: The observed Chinese network largely resembles the auditory semantic network reported for alphabetic languages, but with features specific to Chinese. While the left inferior parietal lobule in the dorsal stream may contribute little to the listening comprehension of Chinese sentences, the ventral stream via the temporal cortex appears to be more strongly engaged. The current findings deepen our understanding of how the semantic nature of spoken Chinese sentences shapes the neural mechanisms engaged.
The current study aimed to investigate the effect of input enhancement on L2 Chinese classifier learning. Two parallel groups of preliminary-level international participants and one group of native Chinese participants were recruited; the three groups were matched in Chinese writing experience and group size (n = 28). One group of international participants was randomly selected as the experimental group; they read a classifier-enhanced text for 10 min before performing a writing task. The other international group and the native group served as the L2-learner control group and the L1-learner control group, respectively; both performed the writing task without the text reading. Results showed that, likely owing to frequent use of 个 /ge4/ and extensive use of novel classifiers, the experimental group used a greater variety of classifiers, and used them more frequently, than the two control groups. However, given that the experimental group tended to avoid complex classifier forms and similar classifiers, future CSL instruction should aim for quality acquisition through the long-term application of input enhancement integrated with explicit explanation on a language-use basis. This study furthers our understanding of how input enhancement applies to the acquisition of a logographic second language.
Christopoulos, for their constructive comments along the journey, which have not only helped me to integrate Studies 1-3 into a coherent thesis, but also allowed the entire thesis to be substantially improved. Many thanks go to my colleagues and friends who have assisted me with participant recruitment, data analysis, rationale establishment, and proofreading while pushing my studies forward. In particular, I am truly impressed by the great support and help that have been
We present a novel method that combines a deep convolutional neural network with traditional mechanical control techniques to determine whether a robotic grasp is successful. To this end, we construct a data acquisition platform capable of robot-arm grasping and photo capture, and collect a diverse set of images by adjusting the shape and posture of the objects and moving the robot arm randomly. To validate generalization capability, we adopt a stochastic sampling method based on cross-validation to test our model. The experiments show that, as the number of object shapes involved in training increases, the network identifies new samples more accurately and more stably. Accuracy rises from 89.2% when only one shape category is used for training to above 99.7% when 17 categories are used.
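The evaluation protocol above (stochastic sampling with cross-validation to estimate how well a grasp-success classifier generalizes) can be sketched as follows. This is a minimal illustration only: the CNN is replaced by a trivial nearest-centroid stand-in, and the data, function names, and parameters are all hypothetical, not taken from the paper.

```python
# Illustrative sketch of k-fold cross-validation for a binary
# grasp-success classifier. All names and data are hypothetical.
import random
import statistics

random.seed(0)

def make_samples(n=200):
    """Synthetic stand-in for grasp images: one feature per sample,
    drawn near 0.0 for failed grasps and near 1.0 for successful ones."""
    data = []
    for _ in range(n):
        label = random.randint(0, 1)              # 1 = successful grasp
        feature = random.gauss(float(label), 0.4)
        data.append((feature, label))
    return data

def fit_centroids(train):
    """Trivial classifier: store the mean feature of each class."""
    return {lbl: statistics.mean(f for f, l in train if l == lbl)
            for lbl in (0, 1)}

def predict(centroids, feature):
    """Assign the label whose centroid is closest to the feature."""
    return min(centroids, key=lambda lbl: abs(feature - centroids[lbl]))

def cross_validate(data, k=5):
    """Shuffle, split into k folds, and return per-fold accuracies."""
    data = data[:]
    random.shuffle(data)
    fold = len(data) // k
    accs = []
    for i in range(k):
        test = data[i * fold:(i + 1) * fold]
        train = data[:i * fold] + data[(i + 1) * fold:]
        model = fit_centroids(train)
        correct = sum(predict(model, f) == l for f, l in test)
        accs.append(correct / len(test))
    return accs

accuracies = cross_validate(make_samples(), k=5)
print(f"mean accuracy over 5 folds: {statistics.mean(accuracies):.2f}")
```

In the paper's actual setting, the stand-in classifier would be the trained CNN, the synthetic features would be captured grasp images, and the folds could be drawn across object-shape categories to probe generalization to unseen shapes.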