Abstract: Vehicular Named Data Networking (VNDN) is considered a strong paradigm for vehicular applications. In VNDN, each node maintains its own cache, but limited cache capacity directly affects performance in highly dynamic environments that demand massive, fast content delivery. Cooperative caching mitigates these issues and plays an efficient role in VNDN. Most studies on cooperative caching focus on content replacement and caching algorithms and implement these methods in a static environment rat…
“…DCCMS introduces an innovative approach to a cache-management technique that prioritizes content based on popularity and social interactions among nodes. It incorporates a master node concept for hierarchical collaboration and content distribution, focusing on maximizing the use of cache resources and minimizing content delivery latency [17]. LFU is a caching algorithm that removes the least frequently used items from the cache first.…”
Section: Methods
Citation type: mentioning (confidence: 99%)
“…Additionally, a dynamic cooperative cache management scheme [17] was suggested, relying on popular and social data and involving a master node that operates hierarchically with nearby nodes to retain frequently accessed contents. However, in this scheme, the master node may experience a bottleneck during high network activity.…”
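The LFU policy mentioned in the snippet above can be sketched in a few lines. This is a generic illustration of least-frequently-used eviction, not the DCCMS implementation; the class and method names are my own:

```python
from collections import defaultdict

class LFUCache:
    """Minimal LFU cache sketch: on overflow, evict the least
    frequently used item (ties broken by insertion order)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.store = {}               # key -> value
        self.freq = defaultdict(int)  # key -> access count

    def get(self, key):
        if key not in self.store:
            return None
        self.freq[key] += 1
        return self.store[key]

    def put(self, key, value):
        if key not in self.store and len(self.store) >= self.capacity:
            # Evict the key with the smallest access count.
            victim = min(self.store, key=lambda k: self.freq[k])
            del self.store[victim]
            del self.freq[victim]
        self.store[key] = value
        self.freq[key] += 1
```

With capacity 2, inserting a third item evicts whichever existing item has been touched least often.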
The named data vehicular sensor network (NDVSN) has become an increasingly important area of research because of the growing demand for data transmission in vehicular networks. In such networks, ensuring the quality of service (QoS) of data transmission is essential. The NDVSN is a mobile ad hoc network that uses vehicles equipped with sensors to collect and disseminate data. QoS is critical in vehicular networks, as data transmission must be reliable, efficient, and timely to support various applications. This paper proposes a QoS-aware forwarding and caching algorithm for NDVSNs, called QWLCPM (QoS-aware Forwarding and Caching using Weighted Linear Combination and Proximity Method). QWLCPM utilizes a weighted linear combination and proximity method to determine stable nodes and the best next-hop forwarding path based on various metrics, including priority, signal strength, vehicle speed, global positioning system data, and vehicle ID. Additionally, it incorporates a weighted linear combination method for the caching mechanism to store frequently accessed data based on zone ID, stability, and priority. The performance of QWLCPM is evaluated through simulations and compared with other forwarding and caching algorithms. QWLCPM’s efficacy stems from its holistic decision-making process, incorporating spatial and temporal elements for efficient cache management. Zone-based caching, showcased in different scenarios, enhances content delivery by utilizing stable nodes. QWLCPM’s proximity considerations significantly improve cache hits, reduce delay, and optimize hop count, especially in scenarios with sparse traffic. Additionally, its priority-based caching mechanism enhances hit ratios and content diversity, emphasizing QWLCPM’s substantial network-improvement potential in vehicular environments. These findings suggest that QWLCPM has the potential to greatly enhance QoS in NDVSNs and serve as a promising solution for future vehicular sensor networks.
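The weighted-linear-combination step at the heart of next-hop selection can be sketched as follows. The metric names, weights, and candidate values here are illustrative assumptions, not QWLCPM's published parameters; the idea is only that each candidate's normalized metrics are combined into a single score and the highest-scoring neighbor is chosen:

```python
def next_hop_score(metrics, weights):
    """Weighted linear combination of normalized per-node metrics.
    `metrics` maps metric name -> value in [0, 1]; weights sum to 1.
    Names and weights are illustrative, not the paper's values."""
    return sum(weights[m] * metrics[m] for m in weights)

# Hypothetical candidate next hops with normalized metrics (higher is better).
weights = {"priority": 0.3, "signal_strength": 0.3,
           "speed_stability": 0.2, "proximity": 0.2}
candidates = {
    "v1": {"priority": 0.9, "signal_strength": 0.6,
           "speed_stability": 0.8, "proximity": 0.5},
    "v2": {"priority": 0.5, "signal_strength": 0.9,
           "speed_stability": 0.7, "proximity": 0.9},
}
# Pick the neighbor with the highest combined score.
best = max(candidates, key=lambda v: next_hop_score(candidates[v], weights))
```

Here v1 scores 0.71 and v2 scores 0.74, so v2 would be selected; in practice the same scoring shape can be reused for the caching decision with zone ID, stability, and priority as the inputs.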
Future research could focus on refining the details of its implementation, scalability in larger networks, and conducting real-world trials to validate its performance under dynamic conditions.
“…These stream applications exhibit high data parallelism, computational intensity, and data locality characteristics. 1,2 Compared to traditional desktop applications, stream applications perform intensive arithmetic operations on each piece of data retrieved from internal memory. Most computations in stream applications can be parallelized at the data, thread, and task levels.…”
This paper proposes a novel method for managing cache consistency in multi-core systems when executing stream applications. The method arranges a mark cache for the private data caches, comprising an optional integrality descriptor for shared read/write data states and a shared-data manipulation position. The integrality descriptor identifies the current mode of operation for shared data in the private data cache. The shared-data manipulation position is a two-dimensional array register of width N and depth M, where N distinguishes between different cache blocks and locking territories, while M corresponds to the number of cache blocks; this enables identification of the cache line or block corresponding to shared data during read and write operations. The proposed method offers simplicity, ease of operation, low hardware implementation cost, good extensibility, and strong configurability, ultimately improving system effectiveness.
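The structure described above can be modeled roughly as follows. This is a behavioral sketch under my own reading of the abstract, not the patent's hardware design; the mode names and field names are assumptions:

```python
class MarkCache:
    """Sketch of the described mark cache for a private data cache.
    A per-block integrality descriptor records the current shared-data
    mode, and an N x M bit array (the 'shared-data manipulation
    position') flags which territory/block a shared access touches."""

    MODES = ("invalid", "shared_read", "shared_write")

    def __init__(self, n_territories, m_blocks):
        self.descriptor = ["invalid"] * m_blocks  # per-block mode
        # N rows distinguish locking territories, M columns index blocks.
        self.position = [[0] * m_blocks for _ in range(n_territories)]

    def mark(self, territory, block, mode):
        """Record that `block` is under a shared access of kind `mode`."""
        assert mode in self.MODES
        self.descriptor[block] = mode
        self.position[territory][block] = 1

    def clear(self, territory, block):
        """Release the block once the shared access completes."""
        self.descriptor[block] = "invalid"
        self.position[territory][block] = 0
```

In hardware this would be a small register file consulted on every shared read/write, which is what keeps the implementation cost low.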
“…This integration enables information gathering, input, storage, processing, and output on a single chip. 1,2 Modern embedded systems, including mobile phones and game consoles, place high demands on multimedia processor performance, particularly for graphics, images, and videos. As a result, Graphics Processing Units (GPUs) are often integrated into SoC chips.…”
Section: Introduction
Citation type: mentioning (confidence: 99%)
“…CPU access requests are typically latency-sensitive, requiring quick service, while GPU access requests are bandwidth-sensitive, necessitating high-bandwidth service to ensure real-time image processing. 1,2 Consequently, the shared utilization mode of on-chip cache has a certain impact on the performance of both CPUs and GPUs, as it becomes challenging to meet the low latency demands of CPUs and the high bandwidth requirements of GPUs simultaneously. As the integration of CPUs and GPUs on SoC chips continues to increase, the issue of memory access contention between the two processing units becomes a pressing technical problem that needs urgent resolution.…”
This research paper presents a novel on-chip cache procedure and device that effectively handles access requests from both CPUs and GPUs. The proposed procedure involves classification caching based on the access request type, arbitrating different types of access requests for caching, and optimizing access time for CPU requests through cache while bypassing cache for GPU requests. The device includes CPU and GPU request queues, a moderator, and cache performance elements. By considering the distinct access characteristics of CPUs and GPUs simultaneously, this approach offers high performance, simple hardware implementation, and minimal cost.
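The split-path handling described above can be sketched as a simple functional model. The queue discipline and the CPU-first moderator policy here are my assumptions for illustration, not the device's actual arbitration logic:

```python
from collections import deque

class CacheArbiter:
    """Sketch of the described scheme: CPU requests (latency-sensitive)
    are served through the cache; GPU requests (bandwidth-sensitive)
    bypass the cache and go straight to memory."""

    def __init__(self, cache, memory):
        self.cpu_queue = deque()
        self.gpu_queue = deque()
        self.cache = cache    # dict: address -> data
        self.memory = memory  # dict: address -> data

    def submit(self, source, address):
        """Enqueue an access request from 'cpu' or 'gpu'."""
        (self.cpu_queue if source == "cpu" else self.gpu_queue).append(address)

    def step(self):
        """Moderator: serve the latency-sensitive CPU queue first."""
        if self.cpu_queue:
            addr = self.cpu_queue.popleft()
            if addr not in self.cache:           # miss: fill the cache
                self.cache[addr] = self.memory[addr]
            return ("cpu", self.cache[addr])
        if self.gpu_queue:
            addr = self.gpu_queue.popleft()
            return ("gpu", self.memory[addr])    # bypass the cache entirely
        return None
```

Because GPU requests never allocate cache lines, streaming GPU traffic cannot evict the CPU's working set, which is the core of the claimed benefit.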