We provide a method for automatically detecting language change across time through a chronologically trained neural language model. We train the model on the Google Books Ngram corpus to obtain word vector representations specific to each year, and identify words that changed significantly from 1900 to 2009. The model identifies words such as "cell" and "gay" as having changed during that period, and simultaneously identifies the specific years during which each word underwent change.
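The core measurement can be sketched as comparing a word's year-specific vectors against its vector in a base year: a large cosine distance in some year signals a shift in usage around that time. The following is a minimal illustration with hand-made toy vectors standing in for embeddings actually trained on the Google Books Ngram corpus; the function names and the toy data are not from the paper.

```python
import numpy as np

def cosine_distance(u, v):
    """1 minus the cosine similarity of two word vectors."""
    return 1.0 - np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def change_scores(vectors_by_year, word, base_year):
    """Distance of a word's vector in each year from its base-year vector."""
    base = vectors_by_year[base_year][word]
    return {year: cosine_distance(base, vecs[word])
            for year, vecs in vectors_by_year.items()}

# Toy vectors standing in for year-specific embeddings (illustrative only).
vectors_by_year = {
    1900: {"gay": np.array([1.0, 0.1, 0.0])},
    1950: {"gay": np.array([0.9, 0.3, 0.1])},
    2000: {"gay": np.array([0.1, 0.2, 1.0])},
}
scores = change_scores(vectors_by_year, "gay", base_year=1900)
# Years with large distances mark when the word's usage shifted.
```

Plotting these per-year distances would reveal not only that a word changed but roughly when, which is the second output the abstract describes.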
Many works in computer vision attempt to solve different tasks such as object detection, scene recognition, or attribute detection, either separately or as a joint problem. In recent years, there has been growing interest in combining the results from these different tasks to provide a textual description of the scene. However, when describing a scene, there are many items that could be mentioned. If a description included every object, relationship, and attribute in the image, it would be extremely long and would fail to convey a true understanding of the image. We present a novel approach to ranking the importance of the items to be described. Specifically, we focus on the task of discriminating one image from a group of others, and we investigate the factors that contribute to the most efficient description that achieves this task. We also provide a quantitative method to measure description quality for this task using data from human subjects, and show that our method achieves better results than baseline methods.
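One simple baseline for this kind of discriminative ranking is to score each item in the target image by how rarely it appears among the distractor images: an item unique to the target discriminates it best. This is a plain inverse-frequency heuristic for illustration, not the ranking model proposed in the paper; the item lists below are hypothetical.

```python
from collections import Counter

def rank_items(target_items, distractor_item_sets):
    """Rank the target image's items by discriminative power:
    items appearing in fewer distractor images rank higher."""
    counts = Counter()
    for items in distractor_item_sets:
        for item in set(items):
            counts[item] += 1
    return sorted(target_items, key=lambda item: counts[item])

# Hypothetical item lists: one target photo vs. three distractor photos.
target = ["dog", "grass", "frisbee"]
distractors = [["dog", "grass"], ["grass", "tree"], ["dog", "street"]]
ranking = rank_items(target, distractors)
# "frisbee" appears in no distractor, so it ranks first.
```

A short description built from the top-ranked items would then be enough to pick the target out of the group, which is the efficiency criterion the abstract evaluates with human-subject data.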
Spatial arrangements of people in a photo reflect how people regard themselves in a group. Comparing the spatial arrangements of subjects in two group photos can help a computer recognize similar social semantics and events. In this paper, we incorporate gender information with the spatial arrangements of subjects to assess the visual similarity between two group photos. For each group photo, detected faces are represented by a spatial face context. Corresponding vertices of the spatial face contexts of the two photos are then retrieved by graph matching. Gender information and relative face positions are applied to the graph-matching results to compute a similarity score. The experimental results show that our method outperforms state-of-the-art methods at identifying group photos with similar subject genders and spatial arrangements.
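Once graph matching has paired up faces across the two photos, a similarity score can combine two signals per matched pair: whether the genders agree and how close the relative face positions are. The formulation below is a hypothetical sketch of such a score, not the paper's actual scoring function; the weight `w_gender` and the face tuples are illustrative assumptions.

```python
import math

def photo_similarity(matched_pairs, w_gender=0.5):
    """Score the similarity of two group photos from matched face pairs.
    Each pair is ((x1, y1, g1), (x2, y2, g2)): positions normalized to
    [0, 1] within each photo, plus a gender label.  Combines gender
    agreement with closeness of relative positions (hypothetical form)."""
    if not matched_pairs:
        return 0.0
    total = 0.0
    for (x1, y1, g1), (x2, y2, g2) in matched_pairs:
        gender_term = 1.0 if g1 == g2 else 0.0
        # Positions are normalized, so the largest possible gap is sqrt(2).
        pos_term = 1.0 - math.dist((x1, y1), (x2, y2)) / math.sqrt(2)
        total += w_gender * gender_term + (1 - w_gender) * pos_term
    return total / len(matched_pairs)

# Two photos whose matched faces sit at similar spots with matching genders.
pairs = [((0.2, 0.5, "F"), (0.25, 0.5, "F")),
         ((0.8, 0.5, "M"), (0.75, 0.55, "M"))]
score = photo_similarity(pairs)
```

Averaging over matched pairs keeps the score in [0, 1] regardless of group size, so photos with different numbers of subjects remain comparable.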