Measuring fluid characteristics is highly important in industries such as the polymer, petroleum, and petrochemical industries. Flow regime classification and void fraction measurement are essential for predicting the performance of many systems, and the efficiency of multiphase flow meters strongly depends on these flow parameters. In this study, the MCNP (Monte Carlo N-Particle) code was employed to simulate annular, stratified, and homogeneous regimes. Two NaI detectors were used to register the photons emitted from a cesium-137 source. The recorded signals of both detectors were decomposed using a discrete wavelet transform (DWT), yielding the low-frequency (approximation) and high-frequency (detail) components of each signal. Various features of the approximation signals were then extracted: the average value, kurtosis, standard deviation (STD), and root mean square (RMS). The extracted features were analyzed to identify those that could classify the flow regimes and serve as inputs to a network for improving the efficiency of flow meters. Two networks were implemented, one for flow regime classification and one for void fraction prediction. Using this wavelet transform and feature extraction approach, the considered flow regimes were classified correctly, and the void fraction percentages were calculated with a mean relative error (MRE) of 0.4%. Although the system presented in this study is proposed for measuring the characteristics of petroleum fluids, it can easily be applied to other fluids, such as polymeric fluids.
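The signal-processing pipeline described above (single-level DWT, then statistical features of the approximation coefficients) can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' implementation: it assumes a Haar wavelet and a synthetic detector signal, and the helper names (`haar_dwt`, `extract_features`) are hypothetical.

```python
import numpy as np

def haar_dwt(signal):
    """Single-level Haar DWT: returns (approximation, detail) coefficients."""
    s = np.asarray(signal, dtype=float)
    if len(s) % 2:                               # pad to even length
        s = np.append(s, s[-1])
    approx = (s[0::2] + s[1::2]) / np.sqrt(2)    # low-frequency component
    detail = (s[0::2] - s[1::2]) / np.sqrt(2)    # high-frequency component
    return approx, detail

def extract_features(x):
    """Average, kurtosis, standard deviation (STD), and RMS of a signal."""
    x = np.asarray(x, dtype=float)
    mean = x.mean()
    std = x.std()
    kurtosis = np.mean((x - mean) ** 4) / std ** 4   # Pearson's kurtosis
    rms = np.sqrt(np.mean(x ** 2))
    return {"mean": mean, "kurtosis": kurtosis, "std": std, "rms": rms}

# Hypothetical detector signal: a noisy periodic count-rate trace
rng = np.random.default_rng(0)
signal = np.sin(np.linspace(0, 8 * np.pi, 256)) + 0.1 * rng.standard_normal(256)

approx, detail = haar_dwt(signal)
features = extract_features(approx)   # feature vector fed to the networks
```

In practice a deeper decomposition (e.g. via PyWavelets) and a trained classifier would replace this sketch, but the feature definitions are the same four statistics named in the abstract.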
Automatic image classification has become a necessary task for handling the rapidly growing volume of digital images, and many algorithms and techniques have been developed for it. Among them, feature fusion-based image classification methods have traditionally relied on hand-crafted features. However, it has been shown that bottleneck features extracted from pretrained convolutional neural networks (CNNs) can improve classification accuracy. Hence, this study analyses the effect of fusing such cues from multiple architectures without relying on any hand-crafted features. First, CNN features are extracted from three pretrained models, namely AlexNet, VGG-16, and Inception-V3. Then, a generalised feature space is formed by employing principal component reconstruction and energy-level normalisation, where the features from each CNN are mapped into a common subspace and combined using arithmetic rules to construct fused feature vectors (FFVs). This transformation plays a vital role in creating an appearance-invariant representation by capturing the complementary information of different high-level features. Finally, a multi-class linear support vector machine is trained on the FFVs. The experimental results demonstrate that such multi-modal CNN feature fusion is well suited to image/object classification tasks, yet it has so far received surprisingly little attention from the computer vision research community.
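The fusion stage above (project each CNN's features into a common subspace, normalise their energy, then combine with arithmetic rules) can be sketched with NumPy alone. This is a hedged sketch, not the paper's implementation: the feature matrices are random stand-ins for real bottleneck features, the subspace dimension `k` is an assumption, and only two of the possible arithmetic fusion rules (sum and concatenation) are shown.

```python
import numpy as np

def pca_project(features, k):
    """Project features (n_samples x d) onto their top-k principal components."""
    centered = features - features.mean(axis=0)
    # SVD of the centered data; rows of vt are the principal directions
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:k].T

def energy_normalise(z):
    """Scale each sample to unit L2 energy so no single CNN dominates."""
    norms = np.linalg.norm(z, axis=1, keepdims=True)
    return z / np.clip(norms, 1e-12, None)

rng = np.random.default_rng(1)
# Stand-ins for bottleneck features of 50 images from three CNNs
f_alexnet   = rng.standard_normal((50, 4096))
f_vgg16     = rng.standard_normal((50, 4096))
f_inception = rng.standard_normal((50, 2048))

k = 32  # common subspace dimension (assumed)
subspace = [energy_normalise(pca_project(f, k))
            for f in (f_alexnet, f_vgg16, f_inception)]

# Arithmetic fusion rules: element-wise sum and concatenation
ffv_sum = subspace[0] + subspace[1] + subspace[2]   # 50 x 32
ffv_cat = np.hstack(subspace)                       # 50 x 96
```

Either fused matrix would then be passed to a multi-class linear SVM (e.g. scikit-learn's `LinearSVC`) for training, as in the final step of the pipeline.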