This paper targets the design of a query-based dataset recommendation system, which accepts a query denoting a user's research interest as a set of research papers and returns a list of recommended datasets ranked by their potential usefulness for the user's research need. The motivation for building such a system is to save users from heavy literature review work when searching for usable datasets. We start by constructing a two-layer network: one layer is a citation network, and the other is a layer of datasets, connected to the first-layer papers in which they were used. A query highlights a set of papers in the citation layer. However, answering the query by naively retrieving the datasets linked to these highlighted papers excludes other semantically relevant datasets, which widely exist several hops away from the queried papers. We propose to learn representations of research papers and datasets in the two-layer network using a heterogeneous variational graph autoencoder, and then compute the relevance of the query to the dataset candidates based on the learned representations. Extensive evaluation results validate that our ranked datasets are more relevant than those obtained by naive retrieval methods and by adaptations of existing related solutions.
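The final scoring step described above can be sketched as follows. This is a minimal, hypothetical illustration, assuming paper and dataset embeddings have already been learned (e.g., by a graph autoencoder) and that relevance is computed as cosine similarity between the aggregated query embedding and each dataset embedding; the function name and aggregation choice are illustrative, not the paper's exact method.

```python
import numpy as np

def rank_datasets(query_paper_embs, dataset_embs):
    """Rank dataset candidates by cosine similarity between the mean
    embedding of the query papers and each dataset embedding.

    query_paper_embs : (n_query_papers, d) learned paper embeddings
    dataset_embs     : (n_datasets, d) learned dataset embeddings
    Returns (ranking, scores): indices sorted by descending relevance,
    and the per-dataset similarity scores.
    """
    q = query_paper_embs.mean(axis=0)          # aggregate the query papers
    q = q / np.linalg.norm(q)                  # unit-normalize the query
    d = dataset_embs / np.linalg.norm(dataset_embs, axis=1, keepdims=True)
    scores = d @ q                             # cosine similarity per dataset
    return np.argsort(-scores), scores
```

Because the scoring operates on embeddings rather than explicit links, datasets several hops away from the queried papers can still rank highly if their learned representations are close to the query.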
Ideally, time-lapse seismic data from different vintages should be identical except at the target area (i.e., the reservoir). However, it is almost impossible to acquire identical data because of many factors, such as differences in the positioning of sources and receivers and near-surface velocity variations, which result in 4D noise and reduce the repeatability of the data. To increase the 4D signal and reduce the noise, time-lapse cross-equalization methods aim to match the monitor data to the baseline. Here, we propose to implement the cross equalization intelligently using deep learning models. Specifically, we train a convolutional autoencoder on the base data and then predict the matching with a separate fully connected neural network operating in the latent space. We apply the approach to synthetic data and show an improvement in repeatability by imaging the reservoir and computing the normalized root mean square.
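The architecture described above, a convolutional autoencoder plus a fully connected matching network in the latent space, can be sketched as follows. This is a simplified, hypothetical sketch in PyTorch: the layer sizes, trace length, and latent dimension are illustrative assumptions, not the authors' actual configuration, and the training loops (reconstruction loss on base data, then a matching loss between monitor and base latents) are omitted.

```python
import torch
import torch.nn as nn

class ConvAE(nn.Module):
    """1D convolutional autoencoder for seismic traces (illustrative sizes)."""
    def __init__(self, n_samples=128, latent=16):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv1d(1, 8, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv1d(8, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(16 * (n_samples // 4), latent),
        )
        self.dec = nn.Sequential(
            nn.Linear(latent, 16 * (n_samples // 4)), nn.ReLU(),
            nn.Unflatten(1, (16, n_samples // 4)),
            nn.ConvTranspose1d(16, 8, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose1d(8, 1, kernel_size=4, stride=2, padding=1),
        )

    def forward(self, x):
        z = self.enc(x)          # latent code of a trace
        return self.dec(z), z    # reconstruction and latent

# Fully connected network that maps monitor latents toward base latents,
# so the decoded monitor trace is "matched" to the baseline vintage.
matcher = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 16))
```

In this sketch, the autoencoder would first be trained to reconstruct base traces; the matcher is then trained on co-located base/monitor latent pairs, and matched monitor data is obtained by decoding the mapped monitor latents.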
Data-intensive research requires the support of appropriate datasets. However, it is often time-consuming to discover usable datasets matching a specific research topic. We formulate the dataset discovery problem on an attributed heterogeneous graph, which is composed of paper–paper citations, paper–dataset citations, and paper content. We propose to characterize both paper and dataset nodes by their commonly shared latent topics, rather than learning user and item representations via canonical graph embedding models, because the usage of datasets and the themes of research projects can be understood on the common basis of research topics. Datasets relevant to a given research project can then be inferred in the shared topic space. The experimental results show that our model generates reasonable profiles for datasets and recommends proper datasets for a query, which represents a research project linked with several papers.
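The inference step in the shared topic space can be sketched as follows. This is a hypothetical illustration assuming each query paper and each dataset has already been profiled as a distribution over the same latent topics; aggregating query papers by averaging and scoring datasets by an inner product are illustrative choices, not necessarily the model's exact scoring rule.

```python
import numpy as np

def topic_relevance(query_topics, dataset_topics):
    """Score datasets against a query in a shared topic space.

    query_topics   : (n_query_papers, k) topic distributions of query papers
    dataset_topics : (n_datasets, k) topic profiles of candidate datasets
    Returns a (n_datasets,) array of relevance scores.
    """
    q = query_topics.mean(axis=0)   # aggregate the query papers' topics
    q = q / q.sum()                 # renormalize to a valid distribution
    return dataset_topics @ q       # inner product with each dataset profile
```

A dataset whose topic profile concentrates on the same topics as the query's papers receives a high score, even without a direct citation link between them.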
scite is a Brooklyn-based organization that helps researchers better discover and understand research articles through Smart Citations: citations that display the context of the citation and describe whether the article provides supporting or contrasting evidence. scite is used by students and researchers from around the world and is funded in part by the National Science Foundation and the National Institute on Drug Abuse of the National Institutes of Health.