The injected fluids in secondary recovery processes supplement the natural energy present in the reservoir to displace oil, and recovery efficiency depends mainly on the mechanism of pressure maintenance. In contrast, the injected fluids in tertiary or enhanced oil recovery (EOR) processes interact with the reservoir rock/oil system. EOR techniques are therefore receiving substantial attention worldwide as available oil resources decline. However, challenges such as low sweep efficiency, high costs and potential formation damage still hinder the wider application of these EOR technologies. Current studies suggest nanoparticles as potential solutions to most of the challenges associated with traditional EOR techniques. This paper provides an overview of the latest studies on the use of nanoparticles to enhance oil recovery and paves the way for researchers interested in integrating these advances. The first part of this paper addresses studies of the major EOR mechanisms of nanoparticles used in the forms of nanofluids, nanoemulsions and nanocatalysts, including disjoining pressure, increasing the viscosity of injection fluids, preventing asphaltene precipitation, wettability alteration and interfacial tension reduction. This part is followed by a review of the most important research on novel nano-assisted EOR methods in which nanoparticles are used to augment existing thermal, chemical and gas methods. Finally, this review identifies the challenges and opportunities for future study of nanoparticle applications in EOR processes.
Many NLP tasks, such as tagging and machine reading comprehension (MRC), face a severe data imbalance issue: negative examples significantly outnumber positive ones, and the huge number of easy negative examples overwhelms training. The most commonly used cross-entropy criterion is actually accuracy-oriented, which creates a discrepancy between training and test: at training time, each training instance contributes equally to the objective function, while at test time the F1 score is concerned mainly with positive examples.
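The accuracy/F1 discrepancy described above can be illustrated with a small toy example (the class ratio, the degenerate classifier, and all numbers below are illustrative assumptions, not taken from the paper):

```python
import math

# Toy dataset (hypothetical): 100 examples, only 5 positive.
# A degenerate model predicting p(pos) = 0.05 everywhere achieves
# low cross entropy and high accuracy, yet zero F1 -- the
# train/test discrepancy noted in the abstract.
labels = [1] * 5 + [0] * 95
probs = [0.05] * 100  # confidently "negative" on every example

# Cross entropy: every instance contributes equally to the objective.
ce = -sum(y * math.log(p) + (1 - y) * math.log(1 - p)
          for y, p in zip(labels, probs)) / len(labels)

# Accuracy at threshold 0.5: 95%, despite missing every positive.
preds = [int(p >= 0.5) for p in probs]
accuracy = sum(int(yh == y) for yh, y in zip(preds, labels)) / len(labels)

# F1 depends only on how positives are handled: 0 true positives -> F1 = 0.
tp = sum(1 for yh, y in zip(preds, labels) if yh == 1 and y == 1)
fp = sum(1 for yh, y in zip(preds, labels) if yh == 1 and y == 0)
fn = sum(1 for yh, y in zip(preds, labels) if yh == 0 and y == 1)
f1 = 2 * tp / (2 * tp + fp + fn) if (2 * tp + fp + fn) else 0.0

print(accuracy, f1)  # 0.95 0.0
```

Because the easy negatives dominate both the loss and the accuracy, a model can look well trained while contributing nothing to the positive-focused F1 metric.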
Epidemiological studies suggest that insulin resistance accelerates progression of age-based cognitive impairment, which neuroimaging has linked to brain glucose hypometabolism. As cellular inputs, ketones increase the Gibbs free energy change for ATP by 27% compared to glucose. Here we test whether dietary changes are capable of modulating sustained functional communication between brain regions (network stability) by changing their predominant dietary fuel from glucose to ketones. We first established network stability as a biomarker for brain aging using two large-scale (n = 292, ages 20 to 85 y; n = 636, ages 18 to 88 y) 3 T functional MRI (fMRI) datasets. To determine whether diet can influence brain network stability, we additionally scanned 42 adults, age < 50 y, using ultrahigh-field (7 T) ultrafast (802 ms) fMRI optimized for single-participant-level detection sensitivity. One cohort was scanned under standard diet, overnight fasting, and ketogenic diet conditions. To isolate the impact of fuel type, an independent overnight-fasted cohort was scanned before and after administration of a calorie-matched glucose and exogenous ketone ester (d-β-hydroxybutyrate) bolus. Across the life span, brain network destabilization correlated with decreased brain activity and cognitive acuity. Effects emerged at 47 y, with the most rapid degeneration occurring at 60 y. Networks were destabilized by glucose and stabilized by ketones, irrespective of whether ketosis was achieved with a ketogenic diet or exogenous ketone ester. Together, our results suggest that brain network destabilization may reflect early signs of hypometabolism associated with dementia. Dietary interventions resulting in ketone utilization increase available energy and thus may show potential in protecting the aging brain.
Blending poly(lactic acid) (PLA) with polyhydroxybutyrate-valerate (PHBV) presents a practical approach to producing fully biobased blends with tailored material properties and improved foam morphologies. This study investigated the effects of the PLA/PHBV blend composition on the morphology, as well as the thermal and mechanical properties, of both solid and microcellular PLA/PHBV injection molded components. Nitrogen (N2) in the supercritical state was used as the physical blowing agent for the microcellular injection molding experiments. Thermal analysis results showed no difference in the thermal properties of solid and microcellular injection molded specimens. It was also found that the Tg of the PLA phase in the PLA/PHBV blends decreased with increasing PHBV content for both solid and microcellular specimens. In addition, PHBV content exceeding 45% significantly increased the crystallinity of PHBV in the PLA/PHBV blends and improved the storage modulus of both solid and microcellular components. PLA/PHBV blends were immiscible when the PHBV content exceeded 30%; the blends were only miscible at a low weight ratio of PHBV. Increasing the PHBV content significantly decreased the cell size and increased the cell density in the microcellular specimens, and resulted in some interesting bimodal microcellular structures within the PLA/PHBV (70:30) blend. Additionally, adding PHBV decreased the tensile strength slightly for both solid and microcellular specimens. Furthermore, adding PHBV did not cause any significant changes in the modulus of the solid or microcellular specimens.
Background: Intensity normalization is an important preprocessing step in brain magnetic resonance image (MRI) analysis. During MR image acquisition, different scanners or parameters may be used to scan different subjects, or the same subject at different times, which can result in large intensity variations. Such intensity variation greatly undermines the performance of subsequent MRI processing and population analysis, such as image registration, segmentation, and tissue volume measurement. Methods: In this work, we propose a new histogram normalization method to reduce the intensity variation between MRIs obtained from different acquisitions. In our experiment, each subject was scanned twice on two different scanners using different imaging parameters. Using noise estimation, the image with the lower noise level was identified and treated as the high-quality reference image; the histogram of the low-quality image was then normalized to that of the reference. The normalization algorithm includes two main steps: (1) intensity scaling (IS), in which the intensities of the high-quality reference image are first rescaled to a range between the low intensity region (LIR) value and the high intensity region (HIR) value; and (2) histogram normalization (HN), in which the histogram of the low-quality input image is stretched to match the histogram of the reference image, so that the intensity range of the normalized image also lies between LIR and HIR. Results: We performed three sets of experiments to evaluate the proposed method, i.e., image registration, segmentation, and tissue volume measurement, and compared it with an existing intensity normalization method. Our histogram normalization framework achieved better results in all experiments.
It is also demonstrated that the brain template built with normalization preprocessing is of higher quality than the template built without it. Conclusions: We have proposed a histogram-based MRI intensity normalization method that can normalize scans acquired on different MRI units. We have validated that the method greatly improves image analysis performance and demonstrated that, with its help, a higher-quality Chinese brain template can be created.
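A rough sketch of the two-step IS/HN scheme described in this abstract is given below. All array shapes, noise parameters, and the LIR/HIR choices are illustrative assumptions, not the paper's values, and the histogram matching here is generic CDF matching rather than the authors' exact implementation:

```python
import numpy as np

def intensity_scale(img, lir, hir):
    """Step 1 (IS): linearly rescale intensities into [LIR, HIR]."""
    lo, hi = img.min(), img.max()
    return lir + (img - lo) * (hir - lir) / (hi - lo)

def histogram_normalize(src, ref, ):
    """Step 2 (HN): stretch src's histogram onto ref's via CDF matching."""
    s_vals, s_idx, s_cnt = np.unique(src.ravel(),
                                     return_inverse=True, return_counts=True)
    r_vals, r_cnt = np.unique(ref.ravel(), return_counts=True)
    s_cdf = np.cumsum(s_cnt) / src.size
    r_cdf = np.cumsum(r_cnt) / ref.size
    # Map each source quantile to the reference intensity at that quantile.
    mapped = np.interp(s_cdf, r_cdf, r_vals)
    return mapped[s_idx].reshape(src.shape)

rng = np.random.default_rng(0)
# "High-quality" reference acquisition, rescaled into [LIR, HIR] = [10, 200].
ref = intensity_scale(rng.normal(100.0, 20.0, (64, 64)), lir=10.0, hir=200.0)
# "Low-quality" acquisition with a different intensity distribution.
low = rng.normal(80.0, 30.0, (64, 64))
out = histogram_normalize(low, ref)
# The normalized image now lies within the reference range [LIR, HIR].
print(round(float(out.min()), 2), round(float(out.max()), 2))
```

Matching the full CDF rather than just the endpoints is what makes the normalized intensities comparable across acquisitions, which is the property the registration, segmentation, and volume-measurement experiments rely on.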
Segmenting a chunk of text into words is usually the first step of processing Chinese text, but its necessity has rarely been explored. In this paper, we ask the fundamental question of whether Chinese word segmentation (CWS) is necessary for deep learning-based Chinese Natural Language Processing. We benchmark neural word-based models, which rely on word segmentation, against neural char-based models, which do not, on four end-to-end NLP benchmark tasks: language modeling, machine translation, sentence matching/paraphrase and text classification. Through direct comparisons between these two types of models, we find that char-based models consistently outperform word-based models. Based on these observations, we conduct comprehensive experiments to study why word-based models underperform char-based models in these deep learning-based NLP tasks. We show that it is because word-based models are more vulnerable to data sparsity and the presence of out-of-vocabulary (OOV) words, and thus more prone to overfitting. We hope this paper will encourage researchers in the community to rethink the necessity of word segmentation in deep learning-based Chinese Natural Language Processing.