Image classification of a visual scene based on visibility is significant due to the rise in readily available automated solutions. Currently, only two extremes of image visibility are commonly considered, i.e., dark and bright. However, real environments also include semi-dark scenarios; hence, the assumption of only two visual extremes should be discarded so that image features can be extracted accurately. Fundamentally, there are two broad approaches to visual scene-based image classification: machine learning (ML) methods and computer vision (CV) methods. In ML, insufficient data, sophisticated hardware requirements, and inadequate image classifier training time remain significant problems, and these techniques fail to classify visual scene-based images with high accuracy. CV methods, the other alternative, also have major issues. They provide some basic procedures that may assist in such classification, but, to the best of our knowledge, no CV algorithm exists to perform it, i.e., they do not account for semi-dark images in the first place. Moreover, these methods do not provide a well-defined protocol to calculate an image's content visibility and thereby classify images. One of the key algorithms for calculating an image's content visibility is backed by the HSL (hue, saturation, lightness) color model, which allows the visibility of a scene to be computed from the lightness/luminance of a single pixel. Recognizing the high potential of the HSL color model, we propose a novel framework relying on a simple statistical manipulation of an entire image's pixel intensities represented in the HSL color model. The proposed algorithm, Relative Perceived Luminance Classification (RPLC), uses the HSL color model to correctly identify the luminosity values of the entire image. Our findings show that the proposed method yields high classification accuracy (over 78%) with a small error rate, and the computational complexity of RPLC is much lower than that of state-of-the-art ML algorithms.
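As a rough illustration of this idea, the sketch below computes per-pixel HSL lightness for an entire image and classifies the scene with assumed thresholds; the threshold values and the use of the mean as the aggregate statistic are our assumptions, not the published RPLC parameters.

```python
import numpy as np
from PIL import Image

# Hypothetical thresholds on mean lightness; the published RPLC parameters
# are not given in this abstract.
DARK_MAX = 0.35
BRIGHT_MIN = 0.65

def classify_visibility(path: str) -> str:
    """Classify an image as dark, semi-dark, or bright from its HSL lightness."""
    rgb = np.asarray(Image.open(path).convert("RGB"), dtype=np.float64) / 255.0
    # HSL lightness per pixel: L = (max(R, G, B) + min(R, G, B)) / 2
    lightness = (rgb.max(axis=2) + rgb.min(axis=2)) / 2.0
    mean_l = float(lightness.mean())  # single statistic over the entire image
    if mean_l < DARK_MAX:
        return "dark"
    if mean_l > BRIGHT_MIN:
        return "bright"
    return "semi-dark"

print(classify_visibility("scene.jpg"))  # e.g. "semi-dark"
```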
With the widespread adoption of blockchain technology, preserving the anonymity and confidentiality of transactions has become crucial. An enormous portion of blockchain research is dedicated to the design and development of privacy protocols, but little has been achieved in the proper assessment of these solutions. To bridge this gap, we have first comprehensively classified the existing solutions based on blockchain's fundamental building blocks (i.e., smart contracts, cryptography, and hashing). Next, we have investigated the evaluation criteria used for validating these techniques. The findings show that the majority of privacy solutions are validated solely on computing resources (i.e., memory, time, storage, throughput, etc.), which is not sufficient. Hence, we have additionally identified and presented various other factors that strengthen or weaken blockchain privacy. Based on those factors, we have formulated an evaluation framework to analyze the efficiency of blockchain privacy solutions. Further, we have introduced the concept of privacy precision, a quantifiable measure to empirically assess privacy efficiency in blockchains. The calculation of privacy precision is based on the effectiveness and strength of the various privacy-protecting attributes of a solution and the associated risks. Finally, we conclude the paper with some open research challenges and future directions. Our study can serve as a benchmark for the empirical assessment of blockchain privacy.
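For illustration only, the hypothetical sketch below aggregates attribute strengths and risks into a single score; the attribute names, weights, and aggregation rule are assumptions and do not reproduce the paper's privacy precision formula.

```python
# Hypothetical illustration: attribute names, weights, and the aggregation
# rule are assumptions, not the paper's actual privacy precision definition.

def privacy_precision(attributes: dict[str, float],
                      weights: dict[str, float],
                      risks: dict[str, float]) -> float:
    """Aggregate per-attribute strength scores (0..1), weighted, and discount risks."""
    total_weight = sum(weights.values())
    strength = sum(weights[a] * attributes.get(a, 0.0) for a in weights) / total_weight
    risk_penalty = sum(risks.values()) / max(len(risks), 1)
    return max(0.0, strength * (1.0 - risk_penalty))

score = privacy_precision(
    attributes={"anonymity": 0.8, "confidentiality": 0.9, "unlinkability": 0.6},
    weights={"anonymity": 0.4, "confidentiality": 0.4, "unlinkability": 0.2},
    risks={"key_leakage": 0.1, "deanonymization": 0.2},
)
print(f"privacy precision = {score:.2f}")
```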
Super-pixels represent perceptually similar visual feature vectors of an image: meaningful groups of pixels, bunched together based on the color and proximity of individual pixels. The accuracy of super-pixel computation is highly affected when the image has problematic pixel intensities, i.e., when a semi-dark image is observed. A widely used method for computing super-pixels is SLIC (Simple Linear Iterative Clustering), owing to its simple approach. SLIC is considerably faster than other state-of-the-art methods. However, it lacks the functionality to retain the content-aware information of the image due to its constrained underlying clustering technique. Moreover, the efficiency of SLIC on semi-dark images is lower than on bright images. We extend SLIC with several computational distance measures to identify potential substitutes that result in regular and accurate image segments. We propose a novel SLIC extension, namely SLIC++, based on a hybrid distance measure that retains content-aware information (lacking in SLIC), which makes SLIC++ more efficient than SLIC. The proposed SLIC++ is effective not only for normal images but also for semi-dark images. The hybrid content-aware distance measure effectively integrates Euclidean super-pixel calculation features with Geodesic distance calculations to retain the angular movements of the components present in the visual image, exclusively targeting semi-dark images. The proposed method is quantitatively and qualitatively analyzed using the Berkeley dataset. We not only visually illustrate the benchmarking results but also report the associated accuracies against the ground-truth image segments in terms of boundary precision. SLIC++ attains high accuracy and creates content-aware super-pixels even when the images are semi-dark in nature. Our findings show that SLIC++ achieves a precision of 39.7%, outperforming SLIC by a substantial margin of up to 8.1%.
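As a rough sketch of such a hybrid measure, the code below blends the standard Euclidean SLIC distance with a straight-line geodesic proxy; the mixing weight and the geodesic approximation are assumptions rather than the published SLIC++ formulation.

```python
import numpy as np

# Illustrative sketch only: the mixing weight `alpha` and the straight-line
# geodesic approximation are assumptions, not the published SLIC++ formula.

def slic_distance(lab_p, lab_q, xy_p, xy_q, S, m=10.0):
    """Standard SLIC distance: color term plus spatial term scaled by grid step S."""
    d_lab = np.linalg.norm(lab_p - lab_q)
    d_xy = np.linalg.norm(xy_p - xy_q)
    return np.sqrt(d_lab ** 2 + (d_xy / S) ** 2 * m ** 2)

def geodesic_approx(image_lab, xy_p, xy_q, steps=16):
    """Accumulate color change along the straight line p -> q as a cheap geodesic proxy."""
    pts = np.linspace(xy_p, xy_q, steps).round().astype(int)
    samples = image_lab[pts[:, 1], pts[:, 0]]  # (x, y) -> row = y, col = x
    return np.linalg.norm(np.diff(samples, axis=0), axis=1).sum()

def hybrid_distance(image_lab, lab_p, xy_p, lab_q, xy_q, S, m=10.0, alpha=0.5):
    """Blend the Euclidean SLIC distance with the geodesic proxy (alpha is a free parameter)."""
    lab_p, lab_q = np.asarray(lab_p, float), np.asarray(lab_q, float)
    xy_p, xy_q = np.asarray(xy_p, float), np.asarray(xy_q, float)
    d_euclid = slic_distance(lab_p, lab_q, xy_p, xy_q, S, m)
    d_geo = geodesic_approx(image_lab, xy_p, xy_q)
    return alpha * d_euclid + (1.0 - alpha) * d_geo
```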
Wind-waves exhibit variations both in shape and steepness, and their asymmetrical nature is a well-known feature. One of the important characteristics of the sea surface is the front-back asymmetry of wind-wave crests. The wind-wave conditions on the surface of the sea constitute a sea state, which is listed as an essential climate variable by the Global Climate Observing System and is considered a critical factor for structural safety and optimal operations of offshore oil and gas platforms. Methods such as statistical representations of sensor-based wave parameter observations and numerical modeling are used to classify sea states. However, for offshore structures such as oil and gas platforms, these methods incur high capital expenditures (CAPEX) and operating expenses (OPEX), along with extensive computational power and time requirements. To address this issue, in this paper we propose a novel, low-cost deep learning-based sea state classification model using visual-range sea images. Firstly, a novel visual-range sea state image dataset was designed and developed for this purpose. The dataset consists of 100,800 images covering four sea states. The dataset was then benchmarked on state-of-the-art deep learning image classification models, with the highest classification accuracy of 81.8% yielded by NASNet-Mobile. Secondly, a novel sea state classification model was proposed. The model took design inspiration from GoogLeNet, which was identified as the optimal reference model for sea state classification. Systematic changes to GoogLeNet's inception block were proposed, resulting in an 8.5% overall classification accuracy improvement over NASNet-Mobile and a 7% improvement over the reference model (i.e., GoogLeNet). Additionally, the proposed model took 26% less training time, and its per-image classification time remained competitive.
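As a minimal baseline sketch (not the paper's modified inception block), the code below fine-tunes torchvision's stock GoogLeNet for four sea-state classes; the learning rate and input size are assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models

# Baseline sketch only: fine-tunes the stock torchvision GoogLeNet for four
# sea-state classes; it does NOT reproduce the paper's modified inception block.
NUM_SEA_STATES = 4

model = models.googlenet(weights=models.GoogLeNet_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, NUM_SEA_STATES)  # replace 1000-way head

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One optimization step on a batch of sea images (N, 3, 224, 224)."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```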
Semantic segmentation for accurate visual perception is a critical task in computer vision. In principle, the automatic classification of dynamic visual scenes using predefined object classes remains unresolved. The challenges in learning deep convolutional neural networks, specifically ResNet-based DeepLabV3+ (the most recent version), are threefold. The problems arise due to (1) biased centric exploitation of filter masks, (2) the lower representational power of residual networks due to identity shortcuts, and (3) a loss of spatial relationships when using per-pixel primitives. To solve these problems, we present an improved approach based on DeepLabV3+, along with an added evaluation metric, namely, Unified DeepLabV3+ and S3core, respectively. The presented unified version reduced the effect of biased exploitation via additional dilated convolution layers with customized dilation rates. We further tackled the problem of representational power by introducing non-linear group normalization shortcuts, targeting the specific problem of semi-dark images. Meanwhile, to keep track of spatial relationships in terms of global and local context, geometrically bunched pixel cues were used. We combined all the proposed variants of DeepLabV3+ into Unified DeepLabV3+ for accurate visual decisions. Finally, the proposed S3core evaluation metric is based on a weighted combination of three different accuracy measures, i.e., pixel accuracy, IoU (intersection over union), and Mean BFScore, as robust identification criteria. Extensive experimental analysis performed on the CamVid dataset confirmed the applicability of the proposed solution to autonomous vehicles and robotics in outdoor settings. The experimental analysis showed that the proposed Unified DeepLabV3+ outperformed DeepLabV3+ by a margin of 3% in terms of class-wise pixel accuracy, along with a higher S3core, depicting the effectiveness of the proposed approach.
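As an illustration of such a weighted combination, the sketch below computes an S3core-style score from the three measures; the equal weights are placeholders, since the paper's actual weighting is not given here.

```python
# Illustrative sketch: S3core combines pixel accuracy, mean IoU, and Mean BFScore.
# The equal weights below are placeholders, not the paper's published weighting.

def s3core(pixel_accuracy: float, mean_iou: float, mean_bfscore: float,
           weights: tuple[float, float, float] = (1/3, 1/3, 1/3)) -> float:
    """Weighted combination of the three accuracy measures, each in [0, 1]."""
    w_acc, w_iou, w_bf = weights
    assert abs(w_acc + w_iou + w_bf - 1.0) < 1e-9, "weights should sum to 1"
    return w_acc * pixel_accuracy + w_iou * mean_iou + w_bf * mean_bfscore

# Example: a segmentation run with 0.91 pixel accuracy, 0.74 mean IoU, 0.68 Mean BFScore.
print(f"S3core = {s3core(0.91, 0.74, 0.68):.3f}")
```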