Several studies have reported differences between African Americans and Caucasians in the relative proportion of psychotic symptoms and disorders, but whether this reflects racial bias in the assessment of psychosis is unclear. The purpose of this study was to examine the distribution of psychotic symptoms, and potential bias in symptoms assessed via semi-structured interview, in a cohort of 3,389 African American and 5,692 Caucasian participants diagnosed with schizophrenia, schizoaffective disorder, or bipolar disorder. In this cohort, the diagnosis of schizophrenia was relatively more common, and the diagnoses of bipolar disorder and schizoaffective disorder-bipolar type relatively less common, among African Americans than Caucasians. With regard to symptoms, relatively more African Americans than Caucasians endorsed hallucination and delusion symptoms, and this pattern was striking among cases diagnosed with bipolar disorder and schizoaffective disorder-bipolar type. In contrast, the relative endorsement of psychotic symptoms was more similar among cases diagnosed with schizophrenia and schizoaffective disorder-depressed type. Differential item functioning analysis revealed that African Americans with mild psychosis over-endorsed "hallucinations in any modality" and under-endorsed "widespread delusions" relative to Caucasians. Other symptoms showed no evidence of racial bias. Thus, racial bias in the assessment of psychotic symptoms does not appear to explain differences in the proportion of symptoms between Caucasians and African Americans. Rather, these differences may reflect ascertainment bias, perhaps indicative of a disparity in access to services, or differential exposure to risk factors for psychosis by race.
Key words: schizophrenia; race; psychosis

INTRODUCTION

Over the last 50 years, several studies have reported differences between individuals of African heritage and Caucasians in the relative proportion of psychotic symptoms and in the diagnosis of schizophrenia versus bipolar disorder. Specifically, reports from samples ascertained in clinical settings have suggested that, compared to Caucasians, individuals of African heritage have higher rates of schizophrenia [Robins and Regier, 1991; Strakowski et al., 1993, 1996, 2003; Bresnahan et al., 2007; Gara et al., 2012; Kirkbride et al., 2012], more severe psychotic symptoms [Mukherjee et al., 1983; Strakowski et al., 2003; Arnold et al., 2004], and less severe negative symptoms [Sharpley et al., 2001]. Other studies of psychotic symptoms have found race differences in the quality and scope of hallucinations and delusions [Barrio et al., 2003; Yamada et al., 2006]. To date, the underlying source of these apparent differences is unknown. It has been argued that the difference could reflect clinician or instrument bias [Neighbors et al., 2003; Trierweiler et al., 2006], although differences remained when clinicians were blinded to ethnicity information [Arnold et al., 2004]. On the other hand, race differences may reflect t...
Multi-modal learning with both text and images benefits multiple applications, such as attribute extraction for e-commerce products. In this paper, we propose Cross-Modality Attention Contrastive Language-Image Pre-training (CMA-CLIP), a new multi-modal architecture that jointly learns fine-grained inter-modality relationships. It fuses CLIP with a sequence-wise attention module and a modality-wise attention module. The network uses CLIP to bridge the inter-modality gap at the global level, and uses the sequence-wise attention module to capture the fine-grained alignment between text and images. In addition, it leverages a modality-wise attention module to learn the relevance of each modality to downstream tasks, making the network robust against irrelevant modalities. CMA-CLIP outperforms the state-of-the-art method on Fashion-Gen by 5.5% in accuracy, achieves competitive performance on Food101, and performs on par with the state-of-the-art method on MM-IMDb. We also demonstrate CMA-CLIP's robustness against irrelevant modalities on an Amazon dataset for the task of product attribute extraction.
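The modality-wise attention idea described above can be sketched as follows. This is a minimal NumPy illustration under assumed shapes, with a hypothetical learned scoring vector `w`, not the paper's actual implementation:

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over a 1-D array
    e = np.exp(x - x.max())
    return e / e.sum()

def modality_wise_attention(text_emb, img_emb, w):
    """Weight each modality by a learned estimate of its task relevance.

    text_emb, img_emb: (d,) global embeddings (e.g., from the text and
    image encoders); w: (d,) learned scoring vector (hypothetical name).
    An irrelevant modality receives a low weight, so the fused vector is
    dominated by the informative modality.
    """
    mods = np.stack([text_emb, img_emb])   # (2, d)
    alpha = softmax(mods @ w)              # (2,) per-modality weights
    fused = alpha @ mods                   # (d,) relevance-weighted fusion
    return fused, alpha
```

In a trained network, `w` would be learned end-to-end with the downstream classifier, so the weights `alpha` adapt per task rather than being fixed.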
On modern e-commerce platforms like Amazon, the number of products is growing rapidly, so precise and efficient product classification becomes a key lever for a great customer shopping experience. A major challenge in tackling the large-scale product classification problem is how to leverage multimodal product information (e.g., image, text). One of the most successful directions is attention-based deep multimodal learning, which comprises two main types of framework: 1) keyless attention, which learns the importance of features within each modality; and 2) key-based attention, which learns the importance of features using other modalities. In this paper, we propose a novel Two-stream Hybrid Attention Network (HANet), which leverages both key-based and keyless attention mechanisms to capture the key information across product image and title modalities. We experimentally show that HANet achieves state-of-the-art performance on an Amazon-scale product classification problem.
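The contrast between the two attention types above can be made concrete with a small NumPy sketch: keyless attention scores features using only the features themselves, while key-based attention scores them with a query taken from the other modality. Shapes and parameter names here are illustrative assumptions, not HANet's actual design:

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over a 1-D array
    e = np.exp(x - x.max())
    return e / e.sum()

def keyless_attention(feats, W, v):
    """Score features from the features themselves (within one modality).

    feats: (n, d) feature vectors; W: (d, d), v: (d,) learned parameters.
    """
    scores = np.tanh(feats @ W) @ v        # (n,) self-derived scores
    alpha = softmax(scores)                # (n,) attention weights
    return alpha @ feats, alpha            # pooled (d,), weights (n,)

def key_based_attention(query, feats, Wq, Wk):
    """Score features using a query from the *other* modality.

    query: (d,) e.g., a title embedding attending over image features.
    """
    scores = (feats @ Wk) @ (Wq @ query)   # (n,) cross-modal scores
    alpha = softmax(scores)
    return alpha @ feats, alpha
```

A hybrid network in this spirit would run both mechanisms and fuse their pooled outputs, so each stream can compensate when the other's scoring signal is weak.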