Manhattan-world urban scenes are common in the real world. We propose a fully automatic approach for reconstructing such scenes from 3D point samples. Our key idea is to represent the geometry of the buildings in the scene with a set of well-aligned boxes. We first extract plane hypotheses from the points, followed by an iterative refinement step. Candidate boxes are then obtained by partitioning the space of the point cloud into a non-uniform grid, and an optimal subset of the candidate boxes is chosen to approximate the geometry of the buildings. Our main contribution is to cast scene reconstruction as a labeling problem solved with a novel Markov Random Field formulation. Unlike previous methods designed for particular types of input point clouds, our method obtains faithful reconstructions from a variety of data sources. Experiments demonstrate that our method is superior to state-of-the-art methods.
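The box-selection step can be sketched as a binary labeling problem: each candidate box is labeled keep (1) or discard (0) by minimizing a per-box data cost plus a pairwise smoothness penalty for disagreeing neighbors. The following is a minimal illustration using iterated conditional modes, a simple MRF inference scheme chosen here for brevity and not necessarily the optimizer used in the paper; the cost values and neighborhood structure are invented for illustration:

```python
import numpy as np

def icm_select(data_cost, pairwise, neighbors, iters=10):
    """Iterated conditional modes for binary box labels (1 = keep box).

    data_cost[i, l] : cost of assigning box i label l (0 or 1)
    pairwise        : penalty when two neighboring boxes disagree
    neighbors[i]    : indices of boxes adjacent to box i
    """
    n = data_cost.shape[0]
    labels = np.argmin(data_cost, axis=1)        # greedy initialization
    for _ in range(iters):
        for i in range(n):
            costs = data_cost[i].copy()
            # Add the smoothness penalty for each label that would
            # disagree with a neighbor's current label.
            for j in neighbors[i]:
                for l in (0, 1):
                    if l != labels[j]:
                        costs[l] += pairwise
            labels[i] = int(np.argmin(costs))
    return labels

# Toy example: three adjacent candidate boxes in a row.
# Box 0 is well supported by points, box 2 is empty, box 1 is ambiguous.
data_cost = np.array([[5.0, 0.0],    # box 0: cheap to keep
                      [1.0, 1.0],    # box 1: ambiguous
                      [0.0, 5.0]])   # box 2: cheap to discard
neighbors = {0: [1], 1: [0, 2], 2: [1]}
labels = icm_select(data_cost, pairwise=0.5, neighbors=neighbors)
```

The well-supported box is kept and the empty one discarded; the smoothness term is what lets neighboring boxes influence the ambiguous case.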
Attention models have been proposed for sentiment analysis because some words are more important than others. However, most existing methods use either local-context-based text information or user preference information. In this work, we propose a novel attention model trained on cognition-grounded eye-tracking data. A reading prediction model is first built using eye-tracking measurements as the dependent variable and contextual features as the independent variables. The predicted reading time is then used to build a cognition-based attention (CBA) layer for neural sentiment analysis. As a comprehensive model, it can capture attention over words in sentences as well as over sentences in documents. Different attention mechanisms can also be incorporated to capture other aspects of attention. Evaluations show that the CBA-based method significantly outperforms state-of-the-art local-context-based attention methods. This offers insight into how cognition-grounded data can be brought into NLP tasks.
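The core idea of a cognition-based attention layer can be illustrated with a minimal sketch: predicted per-word reading times are normalized into attention weights (here via a softmax, an assumption on our part) and used to pool word vectors into a sentence vector. All names and numbers below are illustrative, not the paper's actual architecture:

```python
import numpy as np

def cba_weights(reading_times, temperature=1.0):
    """Softmax over predicted reading times -> attention weights."""
    t = np.asarray(reading_times, dtype=float) / temperature
    e = np.exp(t - t.max())          # subtract max for numerical stability
    return e / e.sum()

def attend(word_vectors, reading_times):
    """Pool word vectors into a sentence vector, weighted by attention."""
    w = cba_weights(reading_times)
    return w @ np.asarray(word_vectors, dtype=float)

# Toy sentence: three words with 4-d embeddings; the second word has the
# longest predicted reading time, so it receives the most attention.
vecs = [[1.0, 0.0, 0.0, 0.0],
        [0.0, 1.0, 0.0, 0.0],
        [0.0, 0.0, 1.0, 0.0]]
times = [0.2, 1.5, 0.4]
weights = cba_weights(times)
sent = attend(vecs, times)
```

The same pooling can be applied a second time over sentence vectors to obtain a document vector, which is how a word- and sentence-level attention hierarchy is typically composed.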
Affective lexicons are among the most important resources in affective computing for text. Manually constructed affective lexicons have limited scale and thus limited use in practical systems. In this work, we propose a regression-based method to automatically infer multi-dimensional affective representations of words from their word embeddings, given a set of seed words. The method exploits the rich semantic information in word embeddings to extract meanings along specific semantic dimensions. It is based on the assumption that different features in a word embedding contribute differently to a particular affective dimension, and that a particular embedding feature contributes differently to different affective dimensions. Evaluation on various affective lexicons shows that our method outperforms state-of-the-art methods on all the lexicons, under different evaluation metrics, by large margins. We also explore different regression models and find that Ridge regression, Bayesian Ridge regression, and Support Vector Regression with a linear kernel are the most suitable. Compared with other state-of-the-art methods, our method also has a computational advantage. Experiments on a sentiment analysis task show that lexicons extended by our method achieve better results than publicly available sentiment lexicons on eight sentiment corpora. The extended lexicons are publicly available.
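The regression step can be sketched with closed-form ridge regression: fit a weight vector mapping the embedding features of seed words to one affective dimension (say, valence), then score unseen words from their embeddings alone. The data below is synthetic and the setup is a simplified single-dimension illustration, not the paper's exact pipeline; in the multi-dimensional case one such regression is fitted per affective dimension:

```python
import numpy as np

def fit_ridge(X, y, alpha=1.0):
    """Closed-form ridge regression: w = (X'X + alpha*I)^{-1} X'y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T @ y)

# Synthetic seed lexicon: 50 seed words with 8-d embeddings and valence
# scores generated from a hidden linear map plus small noise.
rng = np.random.default_rng(0)
X_seed = rng.normal(size=(50, 8))
true_w = rng.normal(size=8)
y_valence = X_seed @ true_w + 0.01 * rng.normal(size=50)

w = fit_ridge(X_seed, y_valence, alpha=0.1)

# Score an unseen word from its embedding alone.
new_vec = rng.normal(size=8)
pred_valence = new_vec @ w
```

Because each learned weight scales one embedding feature, the fitted vector directly expresses the assumption above: each embedding feature contributes its own amount to the affective dimension being predicted.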
We present an automatic reconstruction pipeline for large-scale urban scenes from aerial images captured by a camera mounted on an unmanned aerial vehicle. Using state-of-the-art Structure from Motion and Multi-View Stereo algorithms, we first generate a dense point cloud from the aerial images. Based on a statistical analysis of the buildings' footprint grid, the point cloud is classified into different categories (i.e., buildings, ground, trees, and others). Roof structures are extracted for each individual building using Markov Random Field optimization. A contour refinement algorithm based on pivot point detection is then used to refine the contours of the patches. Finally, polygonal mesh models are extracted from the refined contours. Experiments on various scenes, as well as comparisons with state-of-the-art reconstruction methods, demonstrate the effectiveness and robustness of the proposed method.
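The contour refinement idea of keeping only the salient "pivot" points of a noisy patch boundary is in the spirit of classic polyline simplification. As a hedged stand-in (the paper's actual pivot point detector may differ), here is Ramer-Douglas-Peucker simplification, which retains exactly those points that deviate from a straight-line fit by more than a tolerance:

```python
import math

def point_line_dist(p, a, b):
    """Perpendicular distance from point p to the line through a and b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    num = abs((by - ay) * px - (bx - ax) * py + bx * ay - by * ax)
    den = math.hypot(by - ay, bx - ax)
    return num / den if den else math.hypot(px - ax, py - ay)

def simplify(contour, eps):
    """Ramer-Douglas-Peucker: keep only pivot points deviating > eps."""
    if len(contour) < 3:
        return list(contour)
    a, b = contour[0], contour[-1]
    # Find the interior point farthest from the chord a-b.
    i, dmax = max(((k, point_line_dist(contour[k], a, b))
                   for k in range(1, len(contour) - 1)),
                  key=lambda t: t[1])
    if dmax > eps:
        # Keep the pivot and recurse on both halves.
        left = simplify(contour[:i + 1], eps)
        right = simplify(contour[i:], eps)
        return left[:-1] + right
    return [a, b]

# A near-rectilinear contour with one noisy point: the noise is dropped,
# the true corner at (2, 0) is kept as a pivot.
refined = simplify([(0, 0), (1, 0.01), (2, 0), (2, 1)], eps=0.1)
```

The tolerance `eps` plays the role of the noise threshold: larger values yield coarser, more rectilinear contours suitable for extruding polygonal mesh models.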
Background/Aims: Recent research has attempted combinations of instruments to improve screening accuracy for mild cognitive impairment (MCI) and early Alzheimer’s disease (AD). We compared Mini-Mental State Examination (MMSE), Immediate and Delayed Recall (Logical Memory I and II; LM-I and LM-II, respectively), a single-item informant report of memory problem (IRMP), and a four-item Instrumental Activities of Daily Living (4IADL) scale, and combinations of these tests. Method: The tests were administered together with Clinical Dementia Rating (CDR) to subjects who were cognitively intact (CDR = 0, n = 88), and with diagnoses of MCI (CDR = 0.5, n = 37) and early AD (CDR = 1–2, n = 19). Results: Screening accuracy (receiver operating characteristic area under curve, AUC) for identifying MCI or MCI-AD was lowest for MMSE (AUC 67.6% for MCI or 77.9% for MCI-AD), and better for IRMP (79.5 or 83.2%), 4IADL (76.9 or 84.7%), LM-I (81.2 or 87.1%) and LM-II (86.1 or 90.8%). Combining IRMP, 4IADL and LM-II was most accurate (AUC 91.7% for MCI or 94.5% for MCI-AD); sensitivity: 86.5 or 89.3%; specificity: 86.4 or 88.6%. However, combining IRMP and 4IADL gave nearly as good accuracy (AUC 87.2 or 91.6%); sensitivity: 86.5 or 85.7%; specificity: 79.5 or 85.2%. Conclusion: A brief instrument combining an IRMP and 4IADL items is potentially useful in screening for MCI and early AD.
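The AUC figures above can in principle be reproduced from raw test scores with a simple rank-based computation: AUC equals the probability that a randomly chosen impaired subject receives a higher impairment score than a randomly chosen control, with ties counting half. A minimal sketch on toy data (not the study's data; variable names are illustrative):

```python
def roc_auc(labels, scores):
    """AUC via the Mann-Whitney statistic: probability that a random
    positive (impaired) case outscores a random negative (control)."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy data: 1 = impaired (MCI/early AD), 0 = control; scores could come
# from a single instrument or a combination of instruments.
labels = [0, 0, 1, 1]
scores = [0.1, 0.4, 0.35, 0.8]
auc = roc_auc(labels, scores)    # one of four pos/neg pairs is mis-ranked
```

Combining instruments amounts to computing a composite score per subject (e.g., a weighted sum of IRMP, 4IADL, and LM-II) and feeding that composite into the same AUC computation.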