Production forecasting significantly influences field development planning and economic evaluation. Traditional methods, including numerical simulation and decline curve analysis (DCA) models, either require extensive domain knowledge or lack the flexibility to model complex physics. Data-driven techniques using recurrent neural networks (RNNs), by contrast, have proven efficient and accurate in time-series forecasting applications. This study implemented and compared RNNs with DCA for production forecasting of single and multiple wells. RNN-based long short-term memory (LSTM) models were first developed with various input and output sequence configurations. Well-known DCA models, namely Duong, Stretched Exponential Production Decline (SEPD), and Power Law Exponential (PLE), were then implemented as reference solutions. The data-cleaning process involved preparing historical production rates and well constraints for existing wells. For multiple wells, input parameters from adjacent wells were aggregated before forecasting with the same model. Finally, hold-out training and validation were performed, followed by a comparison of model accuracy and efficiency. Various LSTM-based sequence-to-sequence configurations, such as one-to-one, many-to-one, and many-to-many, were successfully implemented for production forecasting, and feature engineering generated additional features to facilitate training. On the blind-forecast validation set (i.e., the last 20% of the given history), the LSTM predictions agreed with historical production better than the DCA models did: the LSTM models captured the overall trend, whereas DCA produced only smooth curves. In addition, the LSTM models yielded good matches for all three phase rates, whereas DCA was usually limited to a single phase. For multiple wells, a group of neighboring wells with variable history lengths was used to train the model to forecast production rates, a modeling process similar to character-level translation in natural language processing. Finally, it was demonstrated that the developed RNN-based sequence-to-sequence models can be readily extended to other time-series problems such as condition-based maintenance and failure prediction. This study proposes a novel approach to modeling time-series problems (e.g., production forecasting) with RNN-based sequence-to-sequence models. The developed data-driven approach makes history matching and forecasting efficient and accurate for assets with or without substantial operating-history data. In addition, the algorithms and case studies herein were developed with open-source libraries and could be readily incorporated into either in-house or commercial packages.
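As a minimal sketch of the many-to-many configuration described above, the following encoder-decoder LSTM maps a window of past three-phase rates to a window of future rates. The window lengths, layer width, and synthetic data are illustrative assumptions, not the authors' setup:

```python
# Hedged sketch of a many-to-many LSTM forecaster: given the past n_in steps
# of three-phase rates (oil, gas, water), predict the next n_out steps.
# All sizes and the random data below are illustrative assumptions.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

n_in, n_out, n_feat = 24, 6, 3   # 3 features: oil, gas, water rates

# Synthetic stand-in for cleaned, normalized history; real inputs would be
# sliding windows cut from each well's production record.
X = np.random.rand(500, n_in, n_feat).astype("float32")
y = np.random.rand(500, n_out, n_feat).astype("float32")

model = keras.Sequential([
    layers.Input(shape=(n_in, n_feat)),
    layers.LSTM(64),                               # encode the input sequence
    layers.RepeatVector(n_out),                    # repeat context per output step
    layers.LSTM(64, return_sequences=True),        # decode the output sequence
    layers.TimeDistributed(layers.Dense(n_feat)),  # per-step rate prediction
])
model.compile(optimizer="adam", loss="mse")

# Hold-out split mirroring the paper's setup: train on the first 80% of
# windows, validate blind on the last 20%.
split = int(0.8 * len(X))
model.fit(X[:split], y[:split], validation_data=(X[split:], y[split:]),
          epochs=5, batch_size=32, verbose=0)
```

Swapping the window sizes gives the one-to-one and many-to-one variants of the same architecture.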
This paper describes the AFRL-MITLL statistical machine translation systems and the improvements that were developed during the WMT16 evaluation campaign. New techniques applied this year include Neural Machine Translation, a unique selection process for language modelling data, additional out-of-vocabulary transliteration techniques, and morphology generation.
This paper describes the AFRL-MITLL machine translation systems and the improvements that were developed during the WMT17 evaluation campaign. This year, we explore the continuing proliferation of Neural Machine Translation toolkits, revisit our previous data-selection efforts for use in training systems with these new toolkits and expand our participation to the Russian-English, Turkish-English and Chinese-English translation pairs.
As the largest professional network, LinkedIn hosts millions of user profiles and job postings. Users find what they need by entering search queries, but this can be a challenge, especially for users unfamiliar with the specific keywords of their industry. Query Suggestion is a popular feature in which a search engine suggests alternate, related queries. At LinkedIn, we have productionized a deep learning Seq2Seq model that transforms an input query into several alternatives. This model is trained on search-history queries typed directly by users. Once the model is online, we can observe whether users click on the suggested queries; this feedback data indicates which suggestions caught the user's attention. In this work, we propose training a model on both the search-history and user-feedback datasets. We examine several ways to incorporate feedback without any architectural change, including a novel pairwise ranking loss term added during training. The proposed training technique produces the best combined score among several alternatives in offline metrics, and, deployed in the LinkedIn search engine, it significantly outperforms the control model on key business metrics.
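A hedged sketch of how such a pairwise ranking term could be combined with the usual cross-entropy objective. The function names, the margin form, and the weighting factor `alpha` are assumptions for illustration, not LinkedIn's production code:

```python
# Illustrative pairwise ranking term: the model's log-likelihood of a clicked
# suggestion should exceed that of a skipped suggestion by a margin.
import torch

def pairwise_ranking_loss(logp_clicked, logp_skipped, margin=1.0):
    """Hinge loss on per-example sequence log-likelihoods.

    logp_clicked, logp_skipped: tensors of shape (batch,) holding the
    Seq2Seq model's log P(suggestion | input query) for each pair.
    """
    return torch.clamp(margin - (logp_clicked - logp_skipped), min=0.0).mean()

def total_loss(ce_loss, logp_clicked, logp_skipped, alpha=0.5):
    # Combined objective: standard cross-entropy on search-history pairs plus
    # the feedback-driven ranking term, weighted by an assumed factor alpha.
    return ce_loss + alpha * pairwise_ranking_loss(logp_clicked, logp_skipped)
```

Because the term only reorders the model's existing likelihoods, it requires no architectural change, matching the constraint stated in the abstract.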
In this work, we propose a novel, implicitly-defined neural network architecture and describe a method to compute its components. The proposed architecture forgoes the causality assumption used to formulate recurrent neural networks and instead couples the hidden states of the network, allowing improvement on problems with complex, long-distance dependencies. Initial experiments demonstrate the new architecture outperforms both the Stanford Parser and baseline bidirectional networks on the Penn Treebank Part-of-Speech tagging task and a baseline bidirectional network on an additional artificial random biased walk task.
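One way to realize the coupled hidden states is to solve for the whole sequence of states jointly as a fixed point. The sketch below uses plain fixed-point iteration with small random weights as an illustrative assumption; the paper's actual solver and parameterization may differ:

```python
# Minimal sketch, assuming coupled hidden states are found by fixed-point
# iteration H <- f(H, X), so each h_t depends on both h_{t-1} and h_{t+1}
# rather than on the past alone. Weights and sizes are illustrative.
import numpy as np

def implicit_hidden_states(X, Wx, Wh_prev, Wh_next, n_iters=50):
    """Solve h_t = tanh(Wx x_t + Wh_prev h_{t-1} + Wh_next h_{t+1})
    for all t simultaneously, coupling each state to both neighbors."""
    T, d = X.shape[0], Wh_prev.shape[0]
    H = np.zeros((T, d))
    for _ in range(n_iters):
        H_prev = np.vstack([np.zeros(d), H[:-1]])  # h_{t-1}, zero at t = 0
        H_next = np.vstack([H[1:], np.zeros(d)])   # h_{t+1}, zero at t = T-1
        H = np.tanh(X @ Wx.T + H_prev @ Wh_prev.T + H_next @ Wh_next.T)
    return H

# Example: 10 time steps of 4-dim inputs, hidden size 8; small weights keep
# the update a contraction so the iteration converges.
rng = np.random.default_rng(0)
X = rng.standard_normal((10, 4))
H = implicit_hidden_states(X, rng.standard_normal((8, 4)) * 0.1,
                           rng.standard_normal((8, 8)) * 0.1,
                           rng.standard_normal((8, 8)) * 0.1)
```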