Understanding fashion styles and trends is of great potential interest to retailers and consumers alike. The photos people upload to social media are a historical and public data source of how people dress across the world and at different times. While we now have tools to automatically recognize the clothing and style attributes of what people are wearing in these photographs, we lack the ability to analyze spatial and temporal trends in these attributes or make predictions about the future. In this paper we address this need by providing an automatic framework that analyzes large corpora of street imagery to (a) discover and forecast long-term trends of various fashion attributes as well as automatically discovered styles, and (b) identify spatio-temporally localized events that affect what people wear. We show that our framework makes long-term trend forecasts that are >20% more accurate than prior art, and identifies hundreds of socially meaningful events that impact fashion across the globe. The supplementary material can be found at
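The abstract does not specify the forecasting models used. As a minimal sketch of what attribute-level trend forecasting can look like in this setting (a linear trend plus a mean seasonal cycle, fit to a monthly popularity series; all names and data here are illustrative, not the paper's method):

```python
import numpy as np

def forecast_attribute(series, horizon, period=12):
    """Forecast future popularity of a fashion attribute from a monthly
    time series using a least-squares linear trend plus an averaged
    seasonal cycle (a simple stand-in for a real forecasting model)."""
    series = np.asarray(series, dtype=float)
    t = np.arange(len(series))
    # Fit the long-term linear trend with least squares.
    slope, intercept = np.polyfit(t, series, 1)
    detrended = series - (slope * t + intercept)
    # Average detrended residuals by month-of-year to get seasonality.
    seasonal = np.array([detrended[m::period].mean() for m in range(period)])
    future_t = np.arange(len(series), len(series) + horizon)
    return slope * future_t + intercept + seasonal[future_t % period]

# Synthetic example: an attribute rising slowly with a summer peak.
months = np.arange(48)
history = 0.2 + 0.002 * months + 0.05 * np.sin(2 * np.pi * months / 12)
forecast = forecast_attribute(history, horizon=12)
```

Because the seasonal averages are taken over full cycles, the sinusoidal component barely perturbs the fitted slope, so the 12-month forecast continues the upward trend while repeating the seasonal shape.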
A long legacy of media imagery persistently distorts, stereotypes, and ignores marginalized racial and ethnic groups despite widespread calls to diversify media representations. In particular, fashion and beauty media continue to feature light-skinned models and celebrities over dark-skinned individuals, even lightening dark skin with photo editing to achieve ideals of whiteness and lightness. This practice aligns with colorism, or the privileging of light skin tones for access to economic and social capital. This study examines colorism in a particular genre of digital photography, online retail images, as a problem of visual representation. A novel visual computational analysis method is used to quantitatively compare how mainstream clothing retail brands represent model skin tones across still and video media modes. The findings suggest that the analyzed retailers tended to favor light-skinned models on their websites and that model skin tones in product videos were significantly darker than in product photos. These findings are considered through research on race and technology, photographic manipulation, and media misinformation. Ultimately, the study suggests that visual (in)consistencies can reveal the role of structural biases in shaping media representations. The article also provides a methodological tool for conducting this work.
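The abstract describes the comparison only at a high level. One simple way such a photo-vs-video lightness comparison could be operationalized (a hedged sketch, not the study's actual pipeline; the luminance proxy and function names are assumptions) is to score each image by its mean relative luminance and run a two-sample test across media modes:

```python
import numpy as np
from scipy import stats

def mean_lightness(rgb):
    """Mean CIE relative luminance of an sRGB image array (H, W, 3),
    used here as a rough proxy for perceived skin-tone lightness."""
    rgb = np.asarray(rgb, dtype=float) / 255.0
    lum = 0.2126 * rgb[..., 0] + 0.7152 * rgb[..., 1] + 0.0722 * rgb[..., 2]
    return lum.mean()

def compare_modes(photo_imgs, video_frames):
    """Welch t-test on per-image lightness between two media modes."""
    photo_l = [mean_lightness(im) for im in photo_imgs]
    video_l = [mean_lightness(im) for im in video_frames]
    return stats.ttest_ind(photo_l, video_l, equal_var=False)

# Synthetic check: uniformly brighter "photos" vs darker "video frames".
photos = [np.full((4, 4, 3), 200 + i, dtype=np.uint8) for i in range(8)]
videos = [np.full((4, 4, 3), 150 + i, dtype=np.uint8) for i in range(8)]
result = compare_modes(photos, videos)
```

A positive test statistic with a small p-value would indicate that the photo set is significantly lighter than the video set; a real analysis would of course restrict the measurement to detected skin regions rather than whole images.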
The fashion sense (that is, the clothing styles people wear) in a geographical region can reveal information about that region. For example, it can reflect the kinds of activities people do there, or the types of crowds that frequent the region (e.g., tourist hot spot, student neighborhood, business center). We propose a method to automatically create underground neighborhood maps of cities by analyzing how people dress. Using publicly available images from across a city, our method finds neighborhoods with a similar fashion sense and segments the map without supervision. For 37 cities worldwide, we show promising results in creating good underground maps, as evaluated using experiments with human judges and underground map benchmarks derived from non-image data. Our approach further allows detecting distinct neighborhoods (what is the most unique region of LA?) and answering analogy questions between cities (what is the "Downtown LA" of Bogota?). The supplementary material can be found at: www.cs.cornell.edu/~utkarshm/underground_maps/supplementary.pdf "The map is not the thing mapped." (Eric Temple Bell)
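The unsupervised segmentation step can be pictured as grouping map cells by the similarity of their fashion-attribute profiles. Below is a toy sketch of that idea (a minimal k-means over per-cell attribute vectors; the paper's pipeline is more involved, and all data and names here are illustrative):

```python
import numpy as np

def cluster_cells(cell_features, k=2, iters=100, seed=0):
    """Minimal k-means over per-cell fashion-attribute vectors, so map
    cells with a similar fashion sense receive the same neighborhood
    label (an illustrative stand-in for unsupervised segmentation)."""
    X = np.asarray(cell_features, dtype=float)
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(iters):
        # Assign each cell to its nearest cluster center.
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        # Move each center to the mean of its assigned cells.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

# Toy city: two formal-wear-heavy cells and two casual-wear-heavy cells,
# each described by a (formal fraction, casual fraction) vector.
cells = np.array([[0.90, 0.10], [0.85, 0.15],
                  [0.10, 0.90], [0.15, 0.85]])
labels = cluster_cells(cells, k=2)
```

Cells with matching labels would then be rendered as one contiguous "neighborhood" on the map; the real system additionally has to choose the number of neighborhoods and enforce spatial coherence.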