Most applications of underwater wireless sensor networks (UWSNs) demand reliable data delivery over long periods in an efficient and timely manner. However, the harsh and unpredictable underwater environment makes routing more challenging than in terrestrial WSNs. Most existing schemes deploy mobile sensors or a mobile sink (MS) to maximize data gathering; however, their relatively high deployment cost prevents their use in most applications. Thus, this paper presents an autonomous underwater vehicle (AUV)-aided efficient data-gathering (AEDG) routing protocol for reliable data delivery in UWSNs. To prolong the network lifetime, AEDG employs an AUV for data collection from gateways and uses a shortest path tree (SPT) algorithm to associate sensor nodes with the gateways. AEDG also limits the number of nodes associated with each gateway to minimize network energy consumption and to prevent the gateways from overloading. Moreover, the gateway role is rotated over time to balance the energy consumption of the network. To prevent data loss, AEDG allows dynamic data collection at the AUV, depending on the limited number of member nodes associated with each gateway. We also develop a sub-optimal elliptical trajectory for the AUV using a connected dominating set (CDS) to further maximize network throughput. The performance of AEDG is validated via simulations, which demonstrate its effectiveness compared with two existing UWSN routing protocols in terms of the selected performance metrics.
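The capacity-limited SPT association described above can be illustrated as a multi-source shortest-path search seeded at the gateways, with a cap on members per gateway. The abstract does not publish code, so this is a minimal sketch only: the graph structure, link costs, and the `max_members` cap are assumptions, not values from the paper.

```python
import heapq

def spt_associate(graph, gateways, max_members):
    """Associate sensor nodes with gateways via a shortest-path tree
    (multi-source Dijkstra seeded at all gateways), capping how many
    member nodes each gateway may accept -- illustrative sketch only.
    `graph` maps node -> {neighbor: link_cost}."""
    owner, dist = {}, {}
    members = {g: 0 for g in gateways}
    heap = [(0.0, g, g) for g in gateways]      # (cost, node, owning gateway)
    while heap:
        d, node, gw = heapq.heappop(heap)
        if node in owner:
            continue                            # already finalized
        if node not in gateways:
            if members[gw] >= max_members:
                continue                        # gateway full: try another path
            members[gw] += 1
        owner[node], dist[node] = gw, d
        for nbr, w in graph.get(node, {}).items():
            if nbr not in owner:
                heapq.heappush(heap, (d + w, nbr, gw))
    return owner, members
```

Nodes skipped because a gateway is full remain eligible for association through other heap entries, mirroring the overload protection sketched in the abstract.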
Traditional handcrafted image crowd-counting techniques are currently being transformed by machine learning and artificial intelligence into intelligent crowd-counting techniques. This paradigm shift offers many advanced features for the adaptive monitoring and control of dynamic crowd gatherings. Adaptive monitoring, identification/recognition, and the management of diverse crowd gatherings can improve many crowd-management tasks in terms of efficiency, capacity, reliability, and safety. Despite challenges such as occlusion, clutter, irregular object distribution, and nonuniform object scale, convolutional neural networks are a promising technology for intelligent image crowd counting and analysis. In this article, we review, categorize, and analyze (for limitations and distinctive features) the latest convolutional-neural-network-based crowd-counting techniques and provide a detailed evaluation of their performance. We also highlight the potential applications of these techniques. Finally, we conclude by presenting our key observations, providing a strong foundation for future research directions in the design of convolutional-neural-network-based crowd-counting techniques. Further, the article discusses new advances toward understanding crowd counting in smart cities using the Internet of Things (IoT).
In recent years, the use of an Autonomous Underwater Vehicle (AUV) moving along a constrained path has been shown to improve the data delivery ratio and maximize energy efficiency in Underwater Wireless Sensor Networks (UWSNs). However, a constant AUV speed limits the communication window for collecting data packets from nodes deployed randomly in large-scale networks. Moreover, an excessive number of nodes associated with a Gateway Node (GN) quickly depletes its energy, leading to the hot-spot problem. This poses a prominent challenge: jointly improving throughput while minimizing energy consumption. To address these issues, we present a novel data-gathering scheme, the Scalable and Efficient Data Gathering (SEDG) routing protocol, which increases the packet delivery ratio and conserves limited energy through optimal assignment of member nodes to GNs. Moreover, the variable sojourn interval of the AUV decreases the packet drop ratio and hence maximizes network throughput.
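The variable sojourn interval mentioned above can be sketched as allotting each GN a dwell time proportional to its member count, rescaled to fit an overall tour-time budget. This is an illustrative sketch under stated assumptions: `pkt_time` (per-packet drain time) and `tour_budget` are hypothetical parameters, not values from the paper.

```python
def sojourn_intervals(members_per_gn, pkt_time, tour_budget):
    """Assign each gateway a sojourn time long enough to drain its
    members' packets (members * pkt_time), then scale all intervals
    down uniformly if their sum would exceed the AUV's tour budget."""
    raw = {gn: n * pkt_time for gn, n in members_per_gn.items()}
    total = sum(raw.values())
    scale = min(1.0, tour_budget / total) if total else 0.0
    return {gn: t * scale for gn, t in raw.items()}
```

A GN with more member nodes thus receives a longer AUV dwell time, which is one plausible reading of how a variable sojourn interval reduces packet drops.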
The accuracy of object-based computer vision techniques declines due to major challenges arising from large scale variation, varying shape, perspective variation, and a lack of side information. To handle these challenges, most crowd-counting methods use multi-column architectures (restricting themselves to a set of specific density scenes), deploying deeper and multiple networks for density estimation. However, these techniques suffer from several drawbacks: the extraction of identical features across columns, computationally complex architectures, overestimation of density in sparse areas, underestimation in dense areas, and the averaging of feature maps, which reduces the quality of the density map. To overcome these drawbacks and to provide state-of-the-art counting accuracy at comparable computational cost, we propose a deeper and wider network: a Context-Aware Scale Aggregation CNN-based crowd-counting method (CASA-Crowd) that obtains deep, scale-varying, and perspective-varying features. Further, we include dilated convolutions with varying filter sizes to obtain contextual information. In addition, the variation in receptive-field size induced by different dilation rates helps overcome perspective distortion. The quality of the density map is enhanced while preserving the spatial dimensions at comparable computational complexity. We evaluate our method on three well-known datasets: UCF_CC_50, ShanghaiTech Part_A, and ShanghaiTech Part_B.
INDEX TERMS Deep learning, convolutional neural networks, density estimation, crowd counting.
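The way a dilation rate widens the receptive field without adding parameters, which the abstract above relies on, can be seen in a one-dimensional sketch. This is plain illustrative Python, not the paper's 2-D CNN implementation; the signal and kernel values are arbitrary.

```python
def dilated_conv1d(signal, kernel, dilation):
    """1-D dilated (atrous) convolution in 'valid' mode: the kernel taps
    are spaced `dilation` samples apart, so one output sample covers a
    wider span of the input without extra weights."""
    k = len(kernel)
    span = (k - 1) * dilation + 1          # receptive field of one output
    return [
        sum(kernel[j] * signal[i + j * dilation] for j in range(k))
        for i in range(len(signal) - span + 1)
    ]

def receptive_field(kernel_size, dilation):
    """Receptive field of a single dilated convolution layer."""
    return (kernel_size - 1) * dilation + 1
```

A 3-tap kernel sees 3 samples at dilation 1 but 5 samples at dilation 2, which is the mechanism by which different dilation rates capture context at different effective scales.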
Crowd counting is a challenging task due to large perspective, density, and scale variations. CNN-based crowd-counting techniques have achieved significant performance in sparse to dense environments. However, crowd counting in scenes (images) with high perspective variation is harder because the same number of pixels can represent different density levels; such large variations among objects in the same spatial area make accurate counting difficult. Further, existing CNN-based crowd-counting methods extract rich deep features, but these features are used only locally and are diluted while propagating through intermediate layers. This results in high counting errors, especially in dense scenes with high perspective variation. Further, class-specific responses along the channel dimension are underestimated. To address these issues, we propose a CNN-based dense feature extraction network for accurate crowd counting. Our proposed model comprises three main modules: (1) a backbone network, (2) dense feature extraction modules (DFEMs), and (3) a channel attention module (CAM). The backbone network obtains general features with strong transfer-learning ability. Each DFEM is composed of multiple sub-modules, called dense stacked convolution modules (DSCMs), densely connected with each other, so that features extracted in lower and middle-lower layers are propagated to higher layers through dense connections. In addition, combinations of task-independent general features obtained by the former modules and task-specific features obtained by the later ones are incorporated to achieve high counting accuracy in scenes with large perspective variation. Further, to exploit the class-specific response between background and foreground, the CAM is incorporated at the end to obtain high-level features along the channel dimension for better counting accuracy.
Moreover, we evaluate the proposed method on three well-known datasets: ShanghaiTech (Part-A), ShanghaiTech (Part-B), and Venice. The performance of the proposed technique demonstrates its relative effectiveness in terms of the selected performance metrics compared with state-of-the-art techniques.
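A channel attention module of the kind the abstract describes can be sketched in squeeze-and-excitation style: global average pooling per channel, a small bottleneck MLP, and sigmoid gates that rescale each channel. This is a minimal illustration under stated assumptions, not the paper's implementation; the weight matrices `w1` and `w2` and the flattened channel layout are hypothetical.

```python
import math

def channel_attention(feature_maps, w1, w2):
    """Squeeze-and-excitation-style channel attention sketch.
    `feature_maps` is a list of flattened channels; `w1` (channels ->
    hidden) and `w2` (hidden -> channels) form the bottleneck MLP."""
    # squeeze: global average pool per channel
    z = [sum(ch) / len(ch) for ch in feature_maps]
    # excitation: bottleneck MLP, ReLU then per-channel sigmoid gates
    hidden = [max(0.0, sum(w * v for w, v in zip(row, z))) for row in w1]
    gates = [1.0 / (1.0 + math.exp(-sum(w * h for w, h in zip(row, hidden))))
             for row in w2]
    # rescale each channel by its learned gate
    return [[g * v for v in ch] for g, ch in zip(gates, feature_maps)]
```

Gates near 1 preserve foreground-responsive channels while gates near 0 suppress background-dominated ones, which is the class-specific channel weighting the abstract motivates.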
Extracting meaningful information on objects of varying scale and shape, while obtaining distinctive features for small to large objects, is a challenging task in enhancing overall object-segmentation accuracy from a 3D point cloud. To handle this challenge, we propose an attention-based multi-scale atrous convolutional neural network (AMSASeg) for object segmentation from 3D point clouds. Specifically, the backbone network consists of three modules: distinctive atrous spatial pyramid pooling (DASPP), FireModule, and FireDeconv. The DASPP uses average pooling operations and atrous convolutions of different sizes to aggregate distinctive information on objects at multiple scales. The FireModule and FireDeconv are responsible for efficiently extracting general features. Meanwhile, a spatial attention module (SAM) and a channel attention module (CAM) aggregate spatial and semantic information on smaller objects from low-level and high-level layers, respectively. Our network can thus encode multi-scale information and extract distinctive features across objects to enhance segmentation performance. We evaluate our method on the KITTI dataset. Experimental results demonstrate that the proposed network effectively improves segmentation performance on small to large objects at real-time speed.
INDEX TERMS Deep learning, convolutional neural network, object segmentation, 3D point cloud, autonomous vehicles.