This paper presents a novel context-based keyword propagation method for automatic image annotation. We follow the idea of keyword propagation and formulate image annotation as a multi-label learning problem, which we then solve efficiently by linear programming. In this way, our method can exploit the context between keywords during keyword propagation. Unlike the popular relevance models that treat each keyword independently, our method simultaneously propagates multiple keywords (i.e. labels) from the training images to the test images according to their similarities. Moreover, we present a new 2D string kernel, called the spatial spectrum kernel, which takes another type of context into account when defining the similarity between images for keyword propagation. Each image is first represented as a 2D sequence of visual keywords, obtained by dividing the image into blocks and clustering these blocks; the spatial spectrum kernel then measures the similarity between 2D sequences based on shared occurrences of s-length 1D subsequences, obtained by decomposing each 2D sequence into two parallel 1D sequences (i.e. the row-wise and column-wise ones). That is, we incorporate the context between visual keywords into the similarity between images (i.e. 2D sequences) used for keyword propagation. Experiments on two standard image databases demonstrate that the proposed method outperforms state-of-the-art methods for automatic image annotation.
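For concreteness, the following is a minimal Python/NumPy sketch of a two-direction spectrum kernel consistent with the description above: each image is a 2D grid of visual-keyword indices, the grid is decomposed into row-wise and column-wise 1D sequences, s-length subsequence counts are accumulated per direction, and the kernel value is the inner product of the resulting count vectors. The function names, the way counts are aggregated across rows and columns, and the toy 4x4 grids are illustrative assumptions rather than the paper's exact formulation.

from collections import Counter
import numpy as np

def spectrum_counts(seq, s):
    # Count all contiguous length-s subsequences (s-grams) in a 1D sequence.
    return Counter(tuple(seq[i:i + s]) for i in range(len(seq) - s + 1))

def spectrum_kernel(counts_a, counts_b):
    # Inner product of two s-gram count vectors (the classic 1D spectrum kernel).
    return sum(c * counts_b.get(g, 0) for g, c in counts_a.items())

def spatial_spectrum_kernel(grid_a, grid_b, s=2):
    # Sketch: compare two images encoded as 2D grids of visual-keyword indices by
    # decomposing each grid into row-wise and column-wise 1D sequences and summing
    # the 1D spectrum kernels over both directions.
    k = 0
    for A, B in ((grid_a, grid_b), (grid_a.T, grid_b.T)):  # rows, then columns
        ca = sum((spectrum_counts(row, s) for row in A), Counter())
        cb = sum((spectrum_counts(row, s) for row in B), Counter())
        k += spectrum_kernel(ca, cb)
    return k

# Toy usage: 4x4 grids of visual-keyword ids drawn from a vocabulary of 8 keywords.
rng = np.random.default_rng(0)
img1, img2 = rng.integers(0, 8, (4, 4)), rng.integers(0, 8, (4, 4))
print(spatial_spectrum_kernel(img1, img2, s=2))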
By modeling context information, ELMo and BERT have successfully improved the state of the art in word representation and demonstrated their effectiveness on the named entity recognition (NER) task. In this paper, in addition to such context modeling, we propose to encode the prior knowledge of entities from an external knowledge base into the representation, and introduce a Knowledge-Graph Augmented Word Representation, or KAWR, for named entity recognition. Essentially, KAWR provides a knowledge-aware representation for words by 1) encoding entity information from a pre-trained KG embedding model with a new recurrent unit (GERU), and 2) strengthening context modeling from the knowledge perspective through a relation attention scheme based on the entity relations defined in the KG. We demonstrate that KAWR, as an augmented version of existing linguistic word representations, improves F1 scores on 5 datasets from various domains by +0.46 to +2.07. Better generalization is also observed for KAWR on new entities that do not appear in the training sets.
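As an illustration only, the PyTorch sketch below shows the general idea of gating a pre-trained KG entity embedding into a contextual word vector to obtain a knowledge-aware representation; it is not the paper's GERU or its relation attention scheme, whose exact forms are not reproduced here, and the module name, dimensions, and gating formula are hypothetical.

import torch
import torch.nn as nn

class GatedEntityFusion(nn.Module):
    # Illustrative sketch (not the paper's GERU): fuse a contextual word vector h
    # (e.g. from BERT/ELMo) with the pre-trained KG embedding e of the entity the
    # token links to, using a learned gate that decides how much knowledge to inject.
    def __init__(self, word_dim, ent_dim):
        super().__init__()
        self.proj = nn.Linear(ent_dim, word_dim)       # map KG space into word space
        self.gate = nn.Linear(word_dim * 2, word_dim)  # gate computed from [h ; proj(e)]

    def forward(self, h, e):
        e = self.proj(e)
        g = torch.sigmoid(self.gate(torch.cat([h, e], dim=-1)))
        return g * h + (1 - g) * e                     # knowledge-aware word representation

# Toy usage: 3 tokens with 768-d contextual vectors and 100-d KG entity embeddings.
fusion = GatedEntityFusion(768, 100)
h, e = torch.randn(3, 768), torch.randn(3, 100)
print(fusion(h, e).shape)  # torch.Size([3, 768])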
This paper proposes a nonlinear regression model to predict soft tissue deformation after maxillofacial surgery. The features that serve as the model's input are extracted with a finite element model (FEM). The model's output is the facial deformation computed from preoperative and postoperative 3D data. After learning the relationship between the features and the facial deformation with the regression model, we establish a general relationship that can be applied to all patients. When a new patient comes, we predict his/her facial deformation by combining this general relationship with the patient's biomechanical properties. Thus, our model is both biomechanically and statistically relevant. Validation on eleven patients demonstrates the effectiveness and efficiency of our method.
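As a rough illustration rather than the paper's exact model, the sketch below fits a nonlinear (RBF kernel ridge) regression from per-vertex FEM-derived features to measured postoperative displacements; the feature dimensionality, the choice of scikit-learn's KernelRidge, and the randomly generated placeholder arrays are assumptions made purely for demonstration.

import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 6))   # placeholder FEM-derived features from past patients
y_train = rng.normal(size=(500, 3))   # placeholder measured pre-to-post displacements (x, y, z)

# Nonlinear regression capturing the general feature-to-deformation relationship.
model = KernelRidge(kernel="rbf", alpha=1e-2, gamma=0.5)
model.fit(X_train, y_train)

X_new = rng.normal(size=(200, 6))         # FEM features for a new patient's mesh vertices
pred_displacement = model.predict(X_new)  # predicted facial deformation per vertex
print(pred_displacement.shape)            # (200, 3)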