“…After the article search, automatic data extraction was performed using the BertForQuestionAnswering tool, a model architecture based on the Bidirectional Encoder Representations from Transformers (BERT) (16) framework, designed for question-answering tasks. The model was trained on the Stanford Question Answering Dataset (SQuAD) (17) and then fine-tuned on a dataset extracted from the PubMed database to enable this step. A limitation of the SQuAD dataset lies in its reliance on general knowledge derived from a set of Wikipedia articles.…”
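The span-selection step that extractive QA heads such as BertForQuestionAnswering perform can be sketched as follows. This is a simplified illustration, not the cited review's pipeline: the model emits one start logit and one end logit per token, and the answer is the highest-scoring valid span. The token sequence and logit values below are made-up numbers for demonstration.

```python
# Minimal sketch of extractive QA answer selection (illustrative only):
# pick the span (s, e) maximizing start_logits[s] + end_logits[e].

def best_span(start_logits, end_logits, max_answer_len=15):
    """Return (start, end) maximizing start_logits[s] + end_logits[e],
    subject to s <= e < s + max_answer_len."""
    best, best_score = (0, 0), float("-inf")
    for s, s_score in enumerate(start_logits):
        for e in range(s, min(s + max_answer_len, len(end_logits))):
            score = s_score + end_logits[e]
            if score > best_score:
                best, best_score = (s, e), score
    return best

# Illustrative context tokens and fabricated logits, not real model output.
tokens = ["the", "model", "was", "trained", "on", "SQuAD"]
start_logits = [0.1, 0.2, 0.0, 2.5, 0.3, 1.0]
end_logits = [0.0, 0.1, 0.2, 0.4, 0.2, 3.0]
s, e = best_span(start_logits, end_logits)
print(" ".join(tokens[s:e + 1]))  # prints "trained on SQuAD"
```

In a real BERT QA model the logits come from a linear layer over the contextual token embeddings; the post-processing shown here is the same in spirit.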
Background: Language disturbances are a core feature of schizophrenia, often studied as formal thought disorder. The neurobiology of language in schizophrenia has been addressed within the same framework, treating language and thought as equivalents and considering symptoms rather than signs. This review aims to systematically examine published peer-reviewed studies that employed neuroimaging techniques to investigate aberrant brain-language networks in individuals with schizophrenia in relation to linguistic signs.
Methods: We employed a language model for automatic data extraction. We selected studies according to the PRISMA recommendations and conducted the quality assessment of the selected studies according to the STROBE guidance.
Results: We analyzed the findings from 37 studies, categorizing them by patient characteristics, brain measures, and language task types. The inferior frontal gyrus (IFG) and superior temporal gyrus (STG) exhibited the most significant differences across these studies and paradigms.
Conclusions: Based on our analysis, we propose guidelines for future research in this field. It is crucial to investigate the larger networks involved in language processing, and language models must be integrated with brain metrics to enhance our understanding of the relationship between language and brain abnormalities in schizophrenia.
“…It has shown promising performance on unseen NLP tasks (Papadopoulos, Panagakis, Koubarakis, & Nicolaou, 2022), in medical report processing (Donnelly, Grzeszczuk, & Guimaraes, 2022), in text summarization (Patil, Rao, Reddy, Ram, & Meena, 2022), in audio captioning (Liu, Mei, et al., 2022), and in natural language analysis (Guven & Unalir, 2022). Beyond these uses, BERT is likely to be a game-changer in NLP because it is a bidirectional model that combines Masked Language Modeling (MLM) and Next Sentence Prediction (NSP) to understand context-heavy text.…”
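The two pre-training objectives named in the excerpt can be illustrated by how their training examples are constructed. The sketch below is a naive illustration (whitespace tokenization, no WordPiece, no 80/10/10 replacement rule), not BERT's actual data pipeline: MLM hides a fraction of tokens for the model to recover, and NSP pairs a sentence with either its true successor or a random sentence.

```python
import random

# Illustrative construction of MLM and NSP training examples (a sketch,
# not the real BERT preprocessing, which uses WordPiece tokens and an
# 80/10/10 mask/replace/keep rule).

def make_mlm_example(tokens, mask_rate=0.15, rng=None):
    """Mask ~mask_rate of tokens; labels hold the original token at
    masked positions and None elsewhere (no loss on unmasked tokens)."""
    rng = rng or random.Random(0)
    masked, labels = [], []
    for tok in tokens:
        if rng.random() < mask_rate:
            masked.append("[MASK]")
            labels.append(tok)      # model must recover this token
        else:
            masked.append(tok)
            labels.append(None)
    return masked, labels

def make_nsp_example(sentences, i, rng=None):
    """Return (sentence_a, sentence_b, is_next): half the time the true
    successor of sentence i, otherwise a randomly drawn sentence."""
    rng = rng or random.Random(0)
    if rng.random() < 0.5 and i + 1 < len(sentences):
        return sentences[i], sentences[i + 1], True
    return sentences[i], sentences[rng.randrange(len(sentences))], False
```

Bidirectionality is what makes MLM work: because BERT attends to context on both sides of a masked position, it can use the full sentence to predict the hidden token.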
“…Medved et al. [27] designed an automatic QA model using TF-IDF. Second, for machine reading-based QA models, Guven et al. [28] used natural language processing (NLP) models to improve performance, especially when unrelated sentences are included in the dataset. They introduced three kinds of NLP models to select relevant sentences: 1) remove and compare (RC), 2) searching with named entity recognition (SNER), and 3) searching with part-of-speech (POS) tagging (SPOS).…”
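The TF-IDF relevance scoring underlying the first model mentioned above can be sketched in a few lines. This is a generic minimal implementation, not the cited system: whitespace tokenization and the idf smoothing term are assumptions made for the example.

```python
import math
from collections import Counter

# Minimal TF-IDF sentence retrieval sketch (illustrative, not the cited
# system): score each candidate sentence against a question by cosine
# similarity of tf-idf vectors and return the best match.

def tfidf_vectors(docs):
    tokenized = [d.lower().split() for d in docs]      # naive tokenization
    n = len(tokenized)
    df = Counter(t for toks in tokenized for t in set(toks))
    idf = {t: math.log(n / df[t]) + 1.0 for t in df}   # +1 keeps shared terms nonzero
    vecs = [{t: c * idf[t] for t, c in Counter(toks).items()} for toks in tokenized]
    return vecs, idf

def cosine(u, v):
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def best_sentence(question, sentences):
    vecs, idf = tfidf_vectors(sentences)
    q = {t: c * idf.get(t, 0.0)
         for t, c in Counter(question.lower().split()).items()}
    scores = [cosine(q, v) for v in vecs]
    return sentences[max(range(len(scores)), key=scores.__getitem__)]

sentences = [
    "BERT is a bidirectional transformer model.",
    "TF-IDF weights terms by rarity across documents.",
    "Schizophrenia involves language disturbances.",
]
print(best_sentence("how does tf-idf weight terms", sentences))
```

The RC, SNER, and SPOS filters described in the excerpt would sit in front of such a scorer, pruning unrelated sentences before matching.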
Section: Related Work, 2.1 Question-answering Matching Models
Question-answering (QA) models find answers to a given question. Automatically finding answers is increasingly necessary, and doing so over large-scale QA datasets is both important and challenging. In this paper, we address the QA pair matching approach in QA models, which finds the most relevant question and its recommended answer for a given question. Existing studies of this approach search either the entire dataset or a category that the question writer manually specifies. In contrast, we automatically find the category to which the question belongs by employing a text classification model, and then find the answer corresponding to the question within that category. Because of the text classification model, we can effectively reduce the search space for finding answers to a given question; therefore, the proposed model improves the accuracy of the QA matching model and significantly reduces model inference time. Furthermore, to improve the performance of finding similar sentences in each category, we present an ensemble sentence-embedding model that outperforms the individual embedding models. Using real-world QA datasets, we evaluate the performance of the proposed QA matching model. The accuracy of our final ensemble embedding model based on the text classification model is 81.18%, outperforming the existing models by 9.81 to 14.16 percentage points. Moreover, in terms of inference speed, our model is 2.61 to 5.07 times faster than the existing models due to the effective reduction of search spaces by the text classification model.
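The core idea of the abstract, classify first, then match only within the predicted category, can be sketched as follows. The keyword classifier and token-overlap similarity below are deliberately simple stand-ins for the text classification and ensemble embedding models the paper describes; the QA bank, category names, and keywords are invented for illustration.

```python
# Sketch of category-restricted QA pair matching (illustrative stand-ins
# for the paper's text classifier and ensemble embedding models).

# Hypothetical QA bank: category -> list of (stored question, answer).
QA_BANK = {
    "billing": [("how do i get a refund", "Refunds are issued within 7 days."),
                ("where is my invoice", "Invoices are emailed monthly.")],
    "technical": [("why does the app crash", "Update to the latest version."),
                  ("how do i reset my password", "Use the reset link on login.")],
}

# Hypothetical keyword lists standing in for a trained text classifier.
KEYWORDS = {"billing": {"refund", "invoice", "payment"},
            "technical": {"crash", "password", "error", "reset"}}

def classify(question):
    """Pick the category whose keywords overlap the question most."""
    toks = set(question.lower().split())
    return max(KEYWORDS, key=lambda c: len(KEYWORDS[c] & toks))

def jaccard(a, b):
    """Token-overlap similarity, a stand-in for embedding similarity."""
    sa, sb = set(a.split()), set(b.split())
    return len(sa & sb) / len(sa | sb)

def answer(question):
    category = classify(question)          # restrict the search space
    _, best_answer = max(QA_BANK[category],
                         key=lambda qa: jaccard(question.lower(), qa[0]))
    return best_answer

print(answer("how do i reset my password please"))  # prints "Use the reset link on login."
```

The speedup reported in the abstract comes from exactly this restriction: similarity is computed only over the predicted category's QA pairs rather than the whole bank.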