Purpose - The purpose of this study is to propose a knowledge-based mobile learning framework that integrates various types of museum-wide content and supports ubiquitous, context-aware, personalized learning for museums.

Design/methodology/approach - A unified knowledge base with multi-layer reusable content structures serves as the kernel component to integrate content from the exhibition, education and collection domains in a museum. The HowNet approach is adopted to build a unified natural and cultural ontology. The ontology functions as a common, sharable knowledge concept scheme that denotes each knowledge element in the unified knowledge base and associates each learner's learning context and usage with a content profile and a usage profile, respectively. Data mining algorithms, e.g. association mining and clustering, are applied to discover useful patterns for ubiquitous personalization from these content and usage profiles.

Findings - A pilot project based on the proposed framework has been successfully implemented in the Life Science Hall of the National Museum of Natural Science (NMNS), Taiwan, demonstrating the feasibility of the framework.

Originality/value - This study proposes a mobile learning framework that can be replicated in many museums. The framework improves learners' learning experiences with rich related content and with ubiquitous, proactive and adaptive services. Museums can also benefit from implementing the framework through outreach services that combine mobile and Internet communication technologies with learning services to meet educational, promotional and usability needs.
Neural Machine Translation (NMT) has been widely adopted recently due to its advantages over traditional Statistical Machine Translation (SMT). However, an NMT system still often produces translation failures due to the complexity of natural language and the sophistication of neural network design. While in-house black-box system testing based on reference translations (i.e., examples of valid translations) has been a common practice for NMT quality assurance, an increasingly critical industrial practice, named in-vivo testing, exposes unseen types or instances of translation failures while real users are using a deployed industrial NMT system. To address the lack of a test oracle for in-vivo testing of an NMT system, in this paper we propose a new approach for automatically identifying translation failures without requiring reference translations for a translation task; our approach can directly serve as a test oracle for in-vivo testing. Our approach focuses on properties of natural language translation that can be checked systematically and uses information from both the test inputs (i.e., the texts to be translated) and the test outputs (i.e., the translations under inspection) of the NMT system. Our evaluation conducted on real-world datasets shows that our approach can effectively detect targeted property violations as translation failures. Our experiences of deploying our approach in both production and development environments of WeChat (a messenger app with over one billion monthly active users) demonstrate the high effectiveness of our approach, along with its high industry impact.
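As an illustration of the kind of reference-free property check the abstract describes, the following hypothetical sketch flags translations that drop numerals from the source text, using only the test input and the translation under inspection. The property, function name and examples are illustrative assumptions, not the paper's actual checks.

```python
import re

def number_preservation_violated(source: str, translation: str) -> bool:
    """Return True if a numeral in the source is missing from the translation.

    A reference-free check: a translation that drops or alters digits is
    flagged as a suspected failure, with no gold translation required.
    """
    src_numbers = set(re.findall(r"\d+(?:\.\d+)?", source))
    tgt_numbers = set(re.findall(r"\d+(?:\.\d+)?", translation))
    return not src_numbers.issubset(tgt_numbers)

# A faithful translation passes; a dropped number is flagged.
print(number_preservation_violated("Call me at 555-0199", "Llámame al 555-0199"))   # → False
print(number_preservation_violated("The price is 42 dollars", "El precio es alto")) # → True
```

An in-vivo oracle of this shape can run on live traffic because it needs nothing beyond the input/output pair the deployed system already produces.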
Recent findings have shown that information about changes in an object's environmental location in the context of discourse is stored in working memory during sentence comprehension. However, in these studies, changes in the object's location were always consistent with world knowledge (e.g., in “The writer picked up the pen from the floor and moved it to the desk,” the floor and the desk are both common locations for a pen). How do people accomplish comprehension when the object-location information in working memory is inconsistent with world knowledge (e.g., a pen being moved from the floor to the bathtub)? In two visual world experiments, with a “look-and-listen” task, we used eye-tracking data to investigate comprehension of sentences that described location changes under different conditions of appropriateness (i.e., the object and its location were typically vs. unusually coexistent, based on world knowledge) and antecedent context (i.e., contextual information that did vs. did not temporarily normalize unusual coexistence between object and location). Results showed that listeners' retrieval of the critical location was affected by both world knowledge and working memory, and the effect of world knowledge was reduced when the antecedent context normalized unusual coexistence of object and location. More importantly, the activation of world knowledge and working memory seemed to change during the comprehension process. These results are important because they demonstrate that interference between world knowledge and information in working memory appears to arise dynamically during sentence comprehension.
Purpose - This paper sets out to present a new model to avoid the content silo trap, satisfy the knowledge management requirement and support the long-term perspective of developing academic, exhibition and education applications among various domains for museums.

Design/methodology/approach - This paper presents a unified knowledge-based content management (UKCM) model, which comprises the unified knowledge content processes, multi-layer reusable knowledge content structures and an integrated knowledge-based content management system to solve the content silo trap problem. The extended entity-relationship (EER) conceptual model is applied to design a global view of the integrated knowledge system and completely represent multi-layer reusable knowledge content structures for the spectrum of various knowledge assets for all domains and applications in a museum.

Findings - A practical case of a large-scale digital archives project that includes various domains of a natural science museum has been successfully implemented to demonstrate the feasibility of the proposed model.

Originality/value - This paper integrates content management and knowledge management. Digital archives programs in museums can apply the model presented in this study to satisfy the knowledge management requirement and support the long-term perspective of developing academic, exhibition and education applications among various domains.
Document recommendation systems for locating relevant literature have mostly relied on methods developed a decade ago. This is largely due to the lack of a large offline gold-standard benchmark of relevant documents that covers a variety of research fields, such that newly developed literature search techniques can be compared, improved and translated into practice. To overcome this bottleneck, we have established the RElevant LIterature SearcH consortium, consisting of more than 1500 scientists from 84 countries, who have collectively annotated the relevance of over 180 000 PubMed-listed articles with regard to their respective seed (input) article(s). The majority of annotations were contributed by highly experienced, original authors of the seed articles. The collected data cover 76% of all unique PubMed Medical Subject Headings descriptors. No systematic biases were observed across different experience levels, research fields or time spent on annotations. More importantly, annotations of the same document pairs contributed by different scientists were highly concordant. We further show that the three representative baseline methods used to generate recommended articles for evaluation (Okapi Best Matching 25, Term Frequency–Inverse Document Frequency and PubMed Related Articles) had similar overall performance. Additionally, we found that these methods each tend to produce distinct collections of recommended articles, suggesting that a hybrid method may be required to capture all relevant articles. The established database server located at https://relishdb.ict.griffith.edu.au is freely available for the downloading of annotation data and the blind testing of new methods. We expect that this benchmark will be useful for stimulating the development of new, powerful techniques for title- and title/abstract-based search engines for relevant articles in biomedical research.
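The Term Frequency–Inverse Document Frequency baseline mentioned in the abstract can be sketched in a few lines of plain Python: weight each term by how often it appears in a document and how rare it is across the corpus, then rank candidates by cosine similarity to the seed article. The toy corpus and helper names are illustrative assumptions, not the benchmark's actual code.

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Compute sparse TF-IDF weight vectors for tokenized documents."""
    n = len(docs)
    df = Counter()                      # document frequency per term
    for doc in docs:
        df.update(set(doc))
    idf = {t: math.log(n / df[t]) for t in df}
    return [{t: tf * idf[t] for t, tf in Counter(doc).items()} for doc in docs]

def cosine(u, v):
    """Cosine similarity between two sparse vectors stored as dicts."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

# Toy corpus: doc 0 is the seed; docs 1 and 2 are candidates.
corpus = [
    "gene expression in cancer cells".split(),
    "expression of tumor suppressor gene".split(),
    "deep learning for image recognition".split(),
]
vecs = tfidf_vectors(corpus)
# The related biomedical abstract outranks the unrelated one.
print(cosine(vecs[0], vecs[1]) > cosine(vecs[0], vecs[2]))  # → True
```

Okapi BM25 refines this scheme with term-frequency saturation and document-length normalization, which is one reason the two baselines retrieve overlapping but distinct article sets.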