Recently, significant progress has been made in sequential recommendation with deep learning. Existing neural sequential recommendation models usually rely on the item prediction loss to learn model parameters or data representations. However, models trained with this loss are prone to the data sparsity problem. Because the loss overemphasizes final performance, the association or fusion between context data and sequence data is not well captured or utilized for sequential recommendation. To tackle this problem, we propose S3-Rec, which stands for Self-Supervised learning for Sequential Recommendation, based on a self-attentive neural architecture. The main idea of our approach is to utilize intrinsic data correlations to derive self-supervision signals and to enhance data representations via pre-training, thereby improving sequential recommendation. For our task, we devise four auxiliary self-supervised objectives that learn the correlations among attributes, items, subsequences, and sequences by applying the mutual information maximization (MIM) principle. MIM provides a unified way to characterize the correlation between different types of data, which is particularly suitable for our scenario. Extensive experiments on six real-world datasets demonstrate the superiority of our proposed method over existing state-of-the-art methods, especially when only limited training data is available. Besides, we extend our self-supervised learning method to other recommendation models, which also improves their performance.
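The MIM principle referenced above is commonly estimated with an InfoNCE-style lower bound, which scores matched pairs (e.g., an item and its attributes) against in-batch negatives. The sketch below is a generic NumPy illustration under assumed names and toy data, not the paper's actual objective or architecture:

```python
import numpy as np

def info_nce(anchor, positive, temperature=0.1):
    """InfoNCE lower bound on mutual information: positive[i] is the
    matching view for anchor[i]; all other rows act as negatives."""
    # Normalize so the dot product is cosine similarity.
    a = anchor / np.linalg.norm(anchor, axis=1, keepdims=True)
    p = positive / np.linalg.norm(positive, axis=1, keepdims=True)
    logits = (a @ p.T) / temperature                 # (batch, batch) scores
    m = logits.max(axis=1, keepdims=True)            # numerical stability
    log_probs = logits - (m + np.log(np.exp(logits - m).sum(axis=1, keepdims=True)))
    # Loss is the negative log-likelihood of the matching (diagonal) pairs.
    return -np.mean(np.diag(log_probs))

rng = np.random.default_rng(0)
items = rng.normal(size=(8, 16))                     # toy item representations
attrs = items + 0.05 * rng.normal(size=(8, 16))      # correlated "attribute" views
loss = info_nce(items, attrs)                        # low when pairs are correlated
```

Minimizing this loss pulls each representation toward its correlated view and away from the rest of the batch, which is how such objectives derive supervision from intrinsic data correlations.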
Learning high-quality sentence representations benefits a wide range of natural language processing tasks. Although BERT-based pre-trained language models achieve high performance on many downstream tasks, the sentence representations they derive natively have been shown to collapse, yielding poor performance on semantic textual similarity (STS) tasks. In this paper, we present ConSERT, a Contrastive Framework for Self-Supervised SEntence Representation Transfer, which adopts contrastive learning to fine-tune BERT in an unsupervised and effective way. By making use of unlabeled texts, ConSERT resolves the collapse issue of BERT-derived sentence representations and makes them more applicable to downstream tasks. Experiments on STS datasets demonstrate that ConSERT achieves an 8% relative improvement over the previous state-of-the-art, even comparable to the supervised SBERT-NLI. When NLI supervision is further incorporated, we achieve new state-of-the-art performance on STS tasks. Moreover, ConSERT obtains comparable results with only 1,000 samples available, showing its robustness in data-scarcity scenarios.
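The contrastive objective behind this kind of fine-tuning is often an NT-Xent loss over two augmented views of the same batch of sentence embeddings. The following is a minimal NumPy sketch under assumed names; ConSERT's actual augmentations operate on BERT's embedding layer, whereas the dropout-like masking here is only a stand-in:

```python
import numpy as np

def nt_xent(view1, view2, temperature=0.1):
    """NT-Xent loss: each embedding's positive is its counterpart in the
    other view; the remaining 2N-2 embeddings in the batch are negatives."""
    z = np.concatenate([view1, view2], axis=0)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    n = view1.shape[0]
    sim = (z @ z.T) / temperature
    np.fill_diagonal(sim, -np.inf)               # a sample is not its own negative
    m = sim.max(axis=1, keepdims=True)           # stable log-softmax
    log_probs = sim - (m + np.log(np.exp(sim - m).sum(axis=1, keepdims=True)))
    targets = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    return -np.mean(log_probs[np.arange(2 * n), targets])

rng = np.random.default_rng(1)
sents = rng.normal(size=(4, 32))                 # stand-in sentence embeddings
v1 = sents * (rng.random(sents.shape) > 0.1)     # dropout-like augmentation
v2 = sents * (rng.random(sents.shape) > 0.1)
loss = nt_xent(v1, v2)
```

Pulling the two views of each sentence together while pushing apart unrelated sentences spreads the embeddings over the representation space, which is the mechanism that counteracts collapse.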
Amazonian peatlands store a large amount of soil organic carbon (SOC), and its fate under a future changing climate is unknown. Here, we use a process-based peatland biogeochemistry model to quantify carbon accumulation for peatland and nonpeatland ecosystems in the Pastaza-Marañon foreland basin (PMFB) in the Peruvian Amazon from 12,000 y before present to AD 2100. Model simulations indicate that warming accelerates peat SOC loss, while increasing precipitation accelerates peat SOC accumulation at millennial time scales. The uncertain parameters and spatial variation of climate are significant sources of uncertainty in modeled peat carbon accumulation. Under warmer and presumably wetter conditions over the 21st century, the SOC accumulation rate in the PMFB slows to 7.9 (4.3–12.2) g⋅C⋅m−2⋅y−1 from the current rate of 16.1 (9.1–23.7) g⋅C⋅m−2⋅y−1, and the region may turn into a carbon source to the atmosphere at −53.3 (−66.8 to −41.2) g⋅C⋅m−2⋅y−1 (negative indicates source), depending on the level of warming. Peatland ecosystems show a higher vulnerability than nonpeatland ecosystems, as indicated by the ratio of their soil carbon density changes (ranging from 3.9 to 5.8). This is primarily due to the larger carbon stocks of peatlands and the stronger responses of their aerobic and anaerobic decomposition, in comparison with nonpeatland ecosystems, under future climate conditions. Peatland and nonpeatland soils in the PMFB may lose up to 0.4 (0.32–0.52) Pg⋅C by AD 2100, with the largest loss from palm swamp. The carbon-dense Amazonian peatland may switch from a current carbon sink into a source in the 21st century.
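The direction of the simulated sink-to-source switch can be illustrated with a toy annual carbon balance in which decomposition accelerates with warming via a Q10 temperature response. All parameter values below are illustrative placeholders chosen for a clean steady state, not the calibrated values of the process-based model used in the study:

```python
def peat_stock(years, litter_in=200.0, k_base=0.002, q10=2.0,
               warming_rate=0.0, c0=100_000.0):
    """Toy mass balance (g C per m^2): dC/dt = litter input - k(T) * C,
    with the decay constant k scaled by q10 ** (cumulative warming / 10)."""
    c = c0
    for t in range(years):
        k = k_base * q10 ** (warming_rate * t / 10.0)  # warming speeds decay
        c += litter_in - k * c
    return c

stable = peat_stock(80)                     # no warming: inputs balance decay
warmed = peat_stock(80, warming_rate=0.04)  # ~3.2 C warming by year 80
```

With inputs initially balancing decomposition, even modest warming tips the balance so that the stock declines, mirroring the qualitative result that warming turns a peat carbon sink into a source.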
Accurate oil market forecasting plays an important role in the theory and application of oil supply chain management for profit maximization and risk minimization. However, the coronavirus disease 2019 (COVID-19) pandemic compelled governments worldwide to impose restrictions that closed most social and economic activities. These closures increased the volatility of oil markets and pose a huge challenge to oil market forecasting. Fortunately, social media information can finely reflect oil market factors and exogenous factors, such as conflicts and political instability. Accordingly, this study collected a large volume of online oil news and used a convolutional neural network to extract relevant information automatically. Oil markets are divided into four categories: oil price, oil production, oil consumption, and oil inventory; a total of 16,794, 9,139, 8,314, and 8,548 news headlines were collected in the four respective cases. Experimental results indicate that social media information contributes to forecasting oil price, oil production, and oil consumption. The mean absolute percentage errors for oil price, production, and consumption prediction during the COVID-19 pandemic are 0.0717, 0.0144, and 0.0168, respectively. Marketers must consider the impact of social media information on the oil and similar markets, especially during the COVID-19 outbreak.
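The reported accuracy figures are mean absolute percentage errors (MAPE), which can be computed as follows. The price and forecast series here are hypothetical examples, not the study's data:

```python
import numpy as np

def mape(actual, forecast):
    """Mean absolute percentage error: mean of |actual - forecast| / |actual|.
    A value of 0.0717 corresponds to an average 7.17% deviation."""
    actual = np.asarray(actual, dtype=float)
    forecast = np.asarray(forecast, dtype=float)
    return np.mean(np.abs((actual - forecast) / actual))

prices = np.array([40.0, 42.0, 39.0, 41.0])   # hypothetical oil prices
preds = np.array([41.0, 41.0, 40.0, 40.0])    # hypothetical forecasts
print(round(mape(prices, preds), 4))          # → 0.0247
```

Because MAPE divides by the actual value, it is scale-free and comparable across the price, production, and consumption series, though it is undefined when an actual value is zero.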