In this work, we present a simple method of "walking in place" (WIP) using the Microsoft Kinect to explore a virtual environment (VE) with a head-mounted display (HMD). Prior studies have shown that exploring a VE by WIP is equivalent to normal walking in terms of spatial orientation, which suggests that WIP is a promising way to explore a large VE. The Microsoft Kinect sensor is well suited to implementing WIP because it enables real-time skeletal tracking and is relatively inexpensive (150 USD). However, the skeletal information obtained from Kinect sensors can be noisy. Thus, in this work, we discuss how we combined the data from two Kinects to implement a robust WIP algorithm. As part of our analysis of how best to implement WIP with the Kinect, we compare gaze-directed locomotion to torso-directed locomotion. We report that participants' spatial orientation was better when they translated forward in the VE in the direction they were looking.
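The abstract does not give the authors' actual algorithm, but the idea of fusing two noisy Kinect skeletal streams to detect in-place steps can be illustrated with a minimal sketch. All names, the averaging fusion, and the thresholds below are illustrative assumptions, not the paper's implementation:

```python
# Hypothetical WIP step detector: fuse knee-height streams from two Kinect
# sensors (simple averaging to reduce per-sensor noise), then count a step
# each time the fused height rises above a baseline-plus-threshold band.

def fuse(sample_a, sample_b):
    """Average the two sensors' knee-height readings (meters)."""
    return (sample_a + sample_b) / 2.0

def detect_steps(heights_a, heights_b, baseline, threshold=0.08):
    """Count steps: a step begins when the fused knee height exceeds
    baseline + threshold and ends when it drops back below it."""
    stepping = False
    steps = 0
    for a, b in zip(heights_a, heights_b):
        h = fuse(a, b)
        if not stepping and h > baseline + threshold:
            stepping = True
            steps += 1
        elif stepping and h < baseline + threshold:
            stepping = False
    return steps
```

In practice a real implementation would also smooth each stream over time and handle sensor dropout, but the sketch shows why two sensors help: uncorrelated noise in one stream is damped by the other before thresholding.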
Automatic dialogue coherence evaluation has attracted increasing attention and is crucial for developing promising dialogue systems. However, existing metrics have two major limitations: (a) they are mostly trained in a simplified two-level setting (coherent vs. incoherent), while humans give Likert-type multi-level coherence scores, dubbed "quantifiable"; (b) their predicted coherence scores cannot align with the actual human rating standards due to the absence of human guidance during training. To address these limitations, we propose Quantifiable Dialogue Coherence Evaluation (QuantiDCE), a novel framework aiming to train a quantifiable dialogue coherence metric that can reflect the actual human rating standards. Specifically, QuantiDCE includes two training stages, Multi-Level Ranking (MLR) pre-training and Knowledge Distillation (KD) fine-tuning. During MLR pre-training, a new MLR loss is proposed to enable the model to learn a coarse judgement of coherence degrees. Then, during KD fine-tuning, the pre-trained model is further fine-tuned to learn the actual human rating standards with only a small amount of human-annotated data. To preserve generalizability even with limited fine-tuning data, a novel KD regularization is introduced to retain the knowledge learned at the pre-training stage. Experimental results show that the model trained by QuantiDCE presents stronger correlations with human judgements than the other state-of-the-art metrics.
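The core of MLR pre-training is a ranking objective over coherence levels: scores for higher-coherence examples should exceed scores for lower-coherence ones by a margin. The abstract does not specify the loss, so the function below is a hedged sketch of one plausible multi-level margin ranking loss, written in plain Python for clarity rather than a training framework; the grouping-by-level input format and the margin value are assumptions:

```python
def mlr_loss(scores_by_level, margin=0.33):
    """Multi-level margin ranking loss (illustrative sketch).

    scores_by_level maps a coherence level (higher = more coherent) to a
    list of predicted scores. For every pair of levels (lo < hi), each
    level-hi score should exceed each level-lo score by at least `margin`;
    violations are penalized with a hinge and the result is averaged.
    """
    loss, pairs = 0.0, 0
    levels = sorted(scores_by_level)
    for idx, lo in enumerate(levels):
        for hi in levels[idx + 1:]:
            for s_lo in scores_by_level[lo]:
                for s_hi in scores_by_level[hi]:
                    loss += max(0.0, margin - (s_hi - s_lo))
                    pairs += 1
    return loss / pairs
```

A well-separated metric (each level's scores at least `margin` above the level below) drives this loss to zero, which is the "coarse judgement of coherence degrees" the pre-training stage targets.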
Abstract. Pseudo relevance feedback (PRF) via query expansion has proven to be effective in many information retrieval tasks. Most existing approaches are based on the assumption that the most informative terms in top-ranked documents from the first-pass retrieval can be viewed as the context of the query, and thus can be used to specify the information need. However, irrelevant documents may be used in PRF (especially for hard topics), which can bring noise into the feedback process. The recent development of Web 2.0 technologies on the Internet has provided an opportunity to enhance PRF, as more and more high-quality resources can be freely obtained. In this paper, we propose a generative model to select high-quality feedback terms from social annotation tags. The main advantages of our proposed feedback model are as follows. First, our model explicitly explains how each feedback term is generated. Second, our model can take advantage of the human-annotated semantic relationships among terms. Experimental results on three TREC test datasets show that social annotation tags can serve as a good external resource for PRF: tags alone perform as well as the top-ranked documents from first-pass retrieval with the optimal parameter setting on the WSJ dataset, and combining the top-ranked documents with the social annotation tags improves retrieval performance further.
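The abstract's generative model is not specified in detail, but the basic mechanics of selecting expansion terms from social annotation tags can be sketched with a simple co-occurrence scorer: tags that frequently co-occur with the query terms in users' annotations are promoted as feedback terms. The function name, the scoring rule, and the toy data below are illustrative assumptions, not the paper's model:

```python
from collections import Counter

def expansion_terms(query_terms, tagged_docs, k=3):
    """Illustrative tag-based expansion-term selection.

    tagged_docs is a list of tag sets, one per socially annotated document.
    Each candidate tag is scored by how strongly the documents it appears in
    overlap with the query terms; the top-k non-query tags are returned.
    """
    query = set(query_terms)
    scores = Counter()
    for tags in tagged_docs:
        tag_set = set(tags)
        overlap = len(tag_set & query)
        if overlap == 0:
            continue  # annotation shares nothing with the query
        for t in tag_set - query:
            scores[t] += overlap
    return [t for t, _ in scores.most_common(k)]
```

Unlike expansion from top-ranked documents alone, the candidate pool here comes from human annotations, which is where the claimed noise reduction for hard topics would come from.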
scite is a Brooklyn-based organization that helps researchers better discover and understand research articles through Smart Citations: citations that display the context of the citation and describe whether the article provides supporting or contrasting evidence. scite is used by students and researchers from around the world and is funded in part by the National Science Foundation and the National Institute on Drug Abuse of the National Institutes of Health.