This paper presents the SCOPE (Scientific Compound Object Publishing and Editing) system, which is designed to enable scientists to easily author, publish, and edit scientific compound objects. Scientific compound objects encapsulate the various datasets and resources generated or utilized during a scientific experiment or discovery process within a single compound object for publishing and exchange. The adoption of "named graphs" to represent these compound objects enables provenance information to be captured via the typed relationships between the components. This approach is also endorsed by the OAI-ORE initiative and hence ensures that the system generates OAI-ORE-compliant scientific compound objects. SCOPE is an extension of the Provenance Explorer tool, which supports access-controlled viewing of scientific provenance trails. Provenance Explorer provides dynamic rendering of RDF graphs of scientific discovery processes, showing the lineage from raw data to publication; views of different granularity can be inferred automatically using SWRL (Semantic Web Rule Language) rules and an inferencing engine. SCOPE extends the Provenance Explorer tool and GUI by: 1) adding an embedded web browser that can be used to incorporate objects discoverable via the Web; 2) representing compound objects as named graphs that can be saved in RDF, TriX, TriG, or as an Atom syndication feed; 3) enabling scientists to attach Creative Commons licenses to compound objects to specify how they may be re-used; and 4) enabling compound objects to be published as Fedora Object XML (FOXML) files within a Fedora digital library.
Deep learning based image hashing methods learn hash codes by using powerful feature extractors and nonlinear transformations to achieve highly efficient image retrieval. For most end-to-end deep hashing methods, the supervised learning process relies on pair-wise or triplet-wise information to capture the similarity relationships within the data. However, the use of pair-wise and triplet loss functions is limited not only by expensive training costs but also by quantization errors. In this paper, we propose a novel semantic-learning-based hashing method for image retrieval that optimizes the deep feature structure in the hash space from an angular perspective. Specifically, we propose an angular hashing loss function that explicitly improves intra-class compactness and inter-class separability of features in the hash space. Geometrically, the angular hashing loss can be regarded as imposing hash constraints on a hypersphere manifold. To handle the multi-label case during training, we further design a dynamic Softmax training strategy that allows the network to be trained directly with gradient descent. Extensive experiments on two well-known datasets, CIFAR-10 and NUS-WIDE, demonstrate that the proposed Angular Deep Supervised Hashing (ADSH) method generates high-quality, compact binary codes and achieves state-of-the-art performance compared with conventional image hashing and deep learning-based hashing methods.
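The geometric idea behind an angular hashing loss can be sketched with a cosine-margin softmax: features and class directions are compared on the unit hypersphere, and an additive margin on the target class enforces intra-class compactness and inter-class separability. This is a minimal illustrative sketch, not the paper's ADSH implementation; the function names, margin, and scale values are assumptions.

```python
import math

def angular_hashing_loss(feature, class_weights, label, margin=0.35, scale=16.0):
    """Cosine-margin softmax loss on the unit hypersphere (illustrative sketch).

    feature: a hash-layer output vector (list of floats)
    class_weights: one weight vector per class
    label: index of the ground-truth class
    """
    def cosine(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        nu = math.sqrt(sum(a * a for a in u))
        nv = math.sqrt(sum(b * b for b in v))
        return dot / (nu * nv)

    # Cosine similarity between the feature and every class direction:
    # only angles matter, so the constraint lives on the hypersphere.
    cosines = [cosine(feature, w) for w in class_weights]
    # Subtract an additive margin from the target class only; this is what
    # pushes same-class features together and different classes apart.
    logits = [scale * (c - margin if i == label else c)
              for i, c in enumerate(cosines)]
    # Standard softmax cross-entropy over the margin-adjusted logits.
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    return -math.log(exps[label] / sum(exps))
```

A feature aligned with its own class direction yields a much smaller loss than one aligned with another class, which is the behavior the angular constraint is meant to induce.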
Abstract. This paper presents Provenance Explorer, a secure provenance visualization tool designed to dynamically generate customized views of scientific data provenance that depend on the viewer's requirements and/or access privileges. Using RDF and graph visualizations, it enables scientists to view the data, states, and events associated with a scientific workflow in order to understand the scientific methodology and validate the results. Initially, Provenance Explorer presents a simple, coarse-grained view of the scientific process or experiment. However, the GUI allows permitted users to expand links between nodes (input states, events, and output states) to reveal more fine-grained information about particular sub-events and their inputs and outputs. Access control is implemented using Shibboleth to identify and authenticate users and XACML to define access control policies. The system also provides a platform for publishing scientific results: it enables users to select particular nodes within the visualized workflow and drag-and-drop them into an RDF package for publication or e-learning. The direct relationships between the individual components selected for such packages are inferred by the rule-inference engine.
Deep reinforcement learning (DRL) has been utilized in numerous computer vision tasks, such as object detection and autonomous driving. However, relatively few DRL methods have been proposed for image segmentation, particularly for left ventricle (LV) segmentation. Reinforcement learning-based methods in earlier works often rely on learning proper thresholds to perform segmentation, and the segmentation results are inaccurate due to the sensitivity of the threshold. To tackle this problem, a novel DRL agent is designed to imitate the human annotation process for LV segmentation. For this purpose, we formulate the segmentation problem as a Markov decision process and optimize it through DRL. The proposed DRL agent consists of two neural networks, i.e., First-P-Net and Next-P-Net. First-P-Net locates the initial edge point, and Next-P-Net locates the remaining edge points successively, ultimately producing a closed segmentation contour. The experimental results show that the proposed model outperforms previous reinforcement learning methods and achieves performance comparable to deep learning baselines on two widely used LV endocardium segmentation datasets, namely the Automated Cardiac Diagnosis Challenge (ACDC) 2017 dataset and the Sunnybrook 2009 dataset. Moreover, the proposed model achieves higher F-measure accuracy than deep learning methods when trained with a very limited number of samples.
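The Markov-decision-process view of contour tracing can be sketched abstractly: the state is the current edge point, an action is a displacement choosing the next edge point, and the episode ends when the contour closes. This is a toy environment with hypothetical names, not the paper's First-P-Net/Next-P-Net agent; a real reward would score agreement with the ground-truth boundary.

```python
from dataclasses import dataclass, field

@dataclass
class ContourEnv:
    """Toy MDP for successive edge-point tracing (illustrative sketch).

    State:  the current edge point (x, y) plus the visited path.
    Action: a displacement (dx, dy) selecting the next edge point.
    Done:   when the agent returns to its starting point, closing the contour.
    """
    start: tuple
    current: tuple = None
    path: list = field(default_factory=list)

    def reset(self):
        self.current = self.start
        self.path = [self.start]
        return self.current

    def step(self, action):
        dx, dy = action
        self.current = (self.current[0] + dx, self.current[1] + dy)
        self.path.append(self.current)
        # Episode terminates when the contour closes on the start point.
        done = len(self.path) > 2 and self.current == self.start
        # Placeholder reward: a real agent would be rewarded for boundary
        # agreement at every step, not only at closure.
        reward = 1.0 if done else 0.0
        return self.current, reward, done
```

Tracing a unit square, for instance, terminates after four displacements with a closed five-point path.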
Deep learning based image quality assessment (IQA) has been shown to greatly improve the quality-score prediction accuracy for images with a single distortion. However, because these models lack generalizability and their accuracy on multi-distortion image data is relatively low, designing reliable IQA systems is still an open issue. In this paper, we propose to introduce long-range dependencies between local artifacts and high-order spatial pooling into a convolutional neural network (CNN) model to improve the performance and generalizability of full-reference IQA (FR-IQA). The long-range dependency model is based on the hypothesis that apparent local artifacts can affect the overall image quality. The proposed network architecture adopts a non-local means algorithm to establish connections between all positions in the deep feature space and uses the Minkowski function to improve the non-linearity of the spatial pooling. Based on this architecture, a robust FR-IQA system has been constructed and evaluated on three well-known single-distortion IQA databases (LIVE, CSIQ, and TID2013) and a multi-distortion IQA database (MDID). Experimental results demonstrate that, compared with the latest FR-IQA systems, the proposed long-range-dependencies-boosted CNN-based FR-IQA system achieves state-of-the-art performance. A comprehensive cross-database evaluation also shows that the proposed system generalizes well across databases and that multi-distortion image data is more useful for training robust image quality metrics.

INDEX TERMS: Full-reference image quality assessment, quantization, long-range dependencies, convolutional neural networks, spatial pooling.
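Minkowski spatial pooling, mentioned above as the non-linear pooling stage, can be sketched as a generalized mean over local quality scores. This is an illustrative sketch under assumed names and exponent values, not the paper's network layer.

```python
def minkowski_pool(local_scores, p=2.0):
    """Minkowski (generalized-mean) pooling of local quality scores.

    p = 1 reduces to the plain average; larger p weights strong local
    artifacts more heavily, matching the hypothesis that a few salient
    distortions can dominate perceived quality.
    """
    n = len(local_scores)
    return (sum(abs(s) ** p for s in local_scores) / n) ** (1.0 / p)
```

With a higher exponent, a single strong local artifact pulls the pooled score up more than simple averaging would, which is the non-linearity the pooling stage exploits.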