Background: The correct diagnosis of schizophrenia is essential to reduce its economic burden and to avoid worsening patients' comorbidities. However, current clinical diagnosis is subjective and time-consuming. We propose a deep learning method based on bidirectional encoder representations from transformers (BERT) to identify lexical incoherence related to schizophrenia.
Methods: We use a fine-tuned BERT model to extract schizophrenia-related text features and detect possible schizophrenia. We enrolled 13 participants diagnosed with schizophrenia and 13 participants without schizophrenia. After collecting speech data, we built the training set by sampling from 10 speakers in each group; the remaining speakers' data was held out as an external test set to assess the model's performance.
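The speaker-level split described above (10 of 13 speakers per group for training, the rest held out for external testing) can be sketched as follows. This is a minimal illustration, not the study's actual pipeline; the speaker IDs and the `speaker_level_split` helper are hypothetical, and splitting by speaker rather than by utterance is what prevents a speaker's data from leaking into both sets.

```python
import random

def speaker_level_split(speakers_by_group, n_train=10, seed=0):
    """Split SPEAKERS (not utterances) into train/test sets, so no
    speaker contributes data to both sets."""
    rng = random.Random(seed)
    train, test = [], []
    for group, speakers in speakers_by_group.items():
        shuffled = speakers[:]
        rng.shuffle(shuffled)
        train += [(s, group) for s in shuffled[:n_train]]
        test += [(s, group) for s in shuffled[n_train:]]
    return train, test

# Hypothetical speaker IDs: 13 per group, as in the study.
groups = {
    "schizophrenia": [f"scz_{i:02d}" for i in range(13)],
    "control": [f"ctl_{i:02d}" for i in range(13)],
}
train, test = speaker_level_split(groups)
print(len(train), len(test))  # 20 training speakers, 6 held-out speakers
```

The fine-tuned BERT classifier would then be trained only on utterances from the training speakers and evaluated on the held-out speakers.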
Results: After tuning the parameters of the BERT model, we achieve strong detection results, with an average accuracy of 84%, a true-positive rate of 95%, and an F1 score of 0.806. These results underscore the efficacy of the proposed system in identifying lexical incoherence related to schizophrenia.
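For reference, the three reported metrics relate to the confusion-matrix counts as sketched below. The counts used here are hypothetical examples for illustration only, not the study's actual results.

```python
def binary_metrics(tp, fp, fn, tn):
    """Accuracy, true-positive rate (recall), and F1 from
    confusion-matrix counts for a binary classifier."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    recall = tp / (tp + fn)              # true-positive rate
    precision = tp / (tp + fp)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, recall, f1

# Hypothetical counts (NOT the study's data): 19 true positives,
# 5 false positives, 1 false negative, 15 true negatives.
acc, tpr, f1 = binary_metrics(tp=19, fp=5, fn=1, tn=15)
print(f"accuracy={acc:.2f}, TPR={tpr:.2f}, F1={f1:.3f}")
```

Note that F1 combines precision with the true-positive rate, which is why a 95% true-positive rate can coexist with an F1 of 0.806 when some false positives occur.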
Conclusions: Our proposed method, leveraging the deep learning BERT model, shows promise as an aid to schizophrenia diagnosis. The model's self-attention mechanism extracts representative schizophrenia-related text features, providing an objective indicator for psychiatrists. With further refinement, the BERT model could serve as a valuable auxiliary tool for faster, more objective schizophrenia diagnosis, ultimately alleviating the societal economic burden and helping prevent major complications in patients.