The rapid explosion in the use of wireless devices, combined with the simultaneous infrastructure expansion required to support a massive number of users, has made ubiquitous access to data a reality. When a large number of users simultaneously access global data in a mobile computing environment, an efficient means of managing a large number of concurrent transactions is required. Current multi-database concurrency control schemes do not address the limited bandwidth and frequent disconnections associated with wireless networks. This article describes a new hierarchical concurrency control algorithm, v-lock, that addresses the shortcomings of existing multi-database concurrency control schemes. The algorithm uses global locking tables, created with semantic information contained within a hierarchy, to serialize global transactions and remove global deadlocks.
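The abstract does not spell out the structure of the global locking tables, but a hierarchy-driven lock table is commonly realized with multi-granularity (intention) locking. The sketch below is illustrative only: the class and lock-mode names are assumptions, not the v-lock paper's actual design.

```python
# Illustrative sketch of a hierarchical global lock table using
# multi-granularity locking; not the paper's actual v-lock algorithm.
# Modes: IS/IX are intention locks on ancestors, S/X are leaf locks.
COMPAT = {
    ("IS", "IS"): True,  ("IS", "IX"): True,  ("IS", "S"): True,  ("IS", "X"): False,
    ("IX", "IS"): True,  ("IX", "IX"): True,  ("IX", "S"): False, ("IX", "X"): False,
    ("S",  "IS"): True,  ("S",  "IX"): False, ("S",  "S"): True,  ("S",  "X"): False,
    ("X",  "IS"): False, ("X",  "IX"): False, ("X",  "S"): False, ("X",  "X"): False,
}

class GlobalLockTable:
    """Locks every node along a semantic-hierarchy path,
    e.g. ("site1", "db_a", "relation_r")."""
    def __init__(self):
        self.held = {}  # node path (tuple) -> list of (txn, mode)

    def _compatible(self, node, txn, mode):
        return all(t == txn or COMPAT[(m, mode)]
                   for t, m in self.held.get(node, []))

    def acquire(self, txn, path, mode):
        # Intention locks on all ancestors, the requested mode on the leaf.
        nodes = [tuple(path[:i + 1]) for i in range(len(path))]
        modes = ["I" + mode] * (len(nodes) - 1) + [mode]
        if not all(self._compatible(n, txn, m) for n, m in zip(nodes, modes)):
            return False  # caller blocks or aborts (deadlock handling)
        for n, m in zip(nodes, modes):
            self.held.setdefault(n, []).append((txn, m))
        return True

    def release(self, txn):
        for node in list(self.held):
            self.held[node] = [(t, m) for t, m in self.held[node] if t != txn]
```

A global transaction writing one relation blocks a conflicting reader at the leaf, while a reader of a sibling relation proceeds because only compatible intention locks meet on the shared ancestors.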
Additionally, wireless data transmission is subject to weak connections and frequent disconnections. With the increased amount of local memory available at the mobile unit, data replication can be used to provide local data availability, limiting the restrictions imposed by a wireless mobile environment. In a mobile, multi-database environment, local autonomy restrictions prevent the use of a page- or file-based data replication scheme. This article describes a new data replication scheme that addresses the limited bandwidth and local autonomy restrictions. Consistency is maintained by using a parity-based invalidation scheme for data cached at the mobile unit. Additionally, a simple prefetching scheme is used to further improve the effectiveness of the proposed scheme. Finally, simulation results for the concurrency control and replication algorithms are presented and discussed.
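One plausible reading of parity-based invalidation is that the server broadcasts a compact XOR parity per group of items, and the mobile unit drops a cached group whose locally computed parity disagrees. The sketch below is an assumption for illustration; the paper's actual grouping and parity computation may differ.

```python
# Illustrative sketch of parity-based cache invalidation at a mobile
# unit; the grouping and parity function are assumptions, not the
# paper's actual scheme.
from functools import reduce

def group_parity(values):
    """XOR parity over the byte contents of a group of cached items."""
    def item_parity(data):
        return reduce(lambda acc, byte: acc ^ byte, data, 0)
    return reduce(lambda acc, v: acc ^ item_parity(v), values, 0)

class MobileCache:
    def __init__(self):
        self.groups = {}  # group id -> {key: bytes}

    def validate(self, group_id, broadcast_parity):
        """Drop the whole group when its parity disagrees with the
        server's broadcast, forcing a refetch on the next access."""
        cached = self.groups.get(group_id, {})
        if group_parity(cached.values()) != broadcast_parity:
            self.groups.pop(group_id, None)
            return False
        return True
```

Because only a small parity value crosses the wireless link per group, the invalidation traffic stays low even when the cached data itself is large, which matches the limited-bandwidth constraint the abstract targets.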
In a mobile environment, the results of queries often depend on the client's location. These queries are called location-dependent queries (LDQs). Applying the concept of caching to LDQs provides a means for efficient processing when queries exhibit both semantic similarity and spatial locality. Existing LDQ caching schemes require database (DB) servers to provide validity regions (VRs) for LDQ results, which introduces significant processing and/or storage overhead. As a result, DB servers may provide the validity information only conditionally, or not at all. We propose a novel LDQ proxy scheme that can estimate the VR when DB servers do not provide such information. The simulation results show that the LDQ proxy reduces both the LDQ response time and the database workload.
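For a nearest-neighbor LDQ, one conservative way a proxy could estimate a VR without server help is a circle around the query point: the answer cannot change while the client moves less than half the distance gap to the runner-up object. This estimator is an illustrative assumption, not necessarily the paper's proposed method.

```python
# Illustrative VR estimation for a nearest-neighbor LDQ; this simple
# half-gap estimator is an assumption, not the paper's actual scheme.
import math

def estimated_vr_radius(client, objects):
    """Conservative circular validity-region radius: if the client moves
    less than (d2 - d1) / 2, the nearest object cannot change, since the
    NN distance grows by at most the move while the runner-up distance
    shrinks by at most the move."""
    dists = sorted(math.dist(client, obj) for obj in objects)
    return (dists[1] - dists[0]) / 2

def cached_result_valid(old_pos, new_pos, radius):
    """Reuse the cached LDQ answer while the client stays inside the VR."""
    return math.dist(old_pos, new_pos) <= radius
```

The true validity region of a nearest-neighbor query is the object's Voronoi cell; the circle above is a subset of it, so reusing cached answers inside the circle is always safe, at the cost of some missed reuse.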
The dataflow model of computation offers an attractive alternative to control flow in extracting parallelism from programs. The execution of a dataflow instruction is based on the availability of its operands; hence, the synchronization of parallel activities is implicit in the dataflow model. Instructions in the dataflow model impose no constraints on sequencing except for the data dependencies in the program.
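The firing rule described above can be made concrete with a minimal token-driven interpreter: an instruction executes as soon as all of its operand slots are filled, with no program counter. This is a didactic sketch of the model, not any particular dataflow machine.

```python
# Minimal sketch of dataflow firing: an instruction fires when all of
# its operands have arrived; there is no program counter.
from operator import add, mul

class Instr:
    """A dataflow instruction with `arity` operand slots and a list of
    (target instruction, slot) successors for its result token."""
    def __init__(self, op, arity, targets=()):
        self.op, self.arity, self.targets = op, arity, list(targets)
        self.slots = {}

def send(instr, slot, value, ready):
    instr.slots[slot] = value
    if len(instr.slots) == instr.arity:  # operands complete -> enable
        ready.append(instr)

def execute(tokens):
    """tokens: initial (instr, slot, value) triples.
    Returns the results of instructions with no successors."""
    ready, results = [], []
    for instr, slot, value in tokens:
        send(instr, slot, value, ready)
    while ready:
        instr = ready.pop()
        out = instr.op(*(instr.slots[k] for k in range(instr.arity)))
        if not instr.targets:
            results.append(out)
        for target, slot in instr.targets:
            send(target, slot, out, ready)
    return results
```

For the expression (2 + 3) * 4, the multiply holds its constant operand and fires only after the add's result token arrives, illustrating how synchronization is implicit in operand availability rather than imposed by instruction ordering.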
The elegant representation of concurrency in dataflow computation led to considerable interest in dataflow models over the past three decades. These efforts have led to successively more elaborate architectural implementations of the model. However, studies from past projects have revealed a number of inefficiencies in dataflow computing. Recent advances that may address these deficiencies have generated a renewed interest in dataflow. In this article, we survey the various issues and developments in dataflow computing.
The dataflow model of processing in general, and recent efforts to combine dataflow processing with control-flow processing in particular, provide attractive alternatives for satisfying the computational demands of new applications without the shortcomings of traditional concurrent systems. This should motivate researchers to analyze the applicability of familiar concepts, such as scheduling and load balancing, within this new architectural framework. The run-time overhead of detecting and allocating dynamic parallelism in a program can easily offset the performance gain. However, the difficulty of accurately estimating run-time parallelism at compile time is a stumbling block for the static approach. As a compromise, we propose an allocation policy that detects dynamic parallelism for a selected group of program constructs at compile time and allocates them to the estimated hardware resources in a staggered fashion. The proposed staggered scheme is simulated, and its performance is compared against other schemes proposed in the literature. The proposed scheme is shown to offer an order-of-magnitude performance improvement over cyclic distribution.
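The abstract does not define the staggered policy precisely; one plausible reading, sketched below purely for illustration, is that each construct's tasks are placed cyclically but successive constructs start at a rotated processing element, so their bursts of parallelism do not all pile onto the same unit, as plain cyclic distribution would.

```python
# Illustrative contrast between cyclic distribution and one possible
# "staggered" allocation; the paper's exact policy may differ.
def cyclic(n_tasks, n_pes):
    """Cyclic distribution: task i goes to processing element i mod P."""
    return [i % n_pes for i in range(n_tasks)]

def staggered(constructs, n_pes):
    """Hypothetical staggered allocation: each construct (given as its
    task count) is placed cyclically, but each successive construct
    starts where the previous one left off, rotating the load across
    processing elements instead of restarting at PE 0."""
    placement, start = [], 0
    for n_tasks in constructs:
        placement.append([(start + i) % n_pes for i in range(n_tasks)])
        start = (start + n_tasks) % n_pes
    return placement
```

With two three-task constructs on four processing elements, cyclic placement would start both at PE 0, while the staggered variant shifts the second construct to PEs 3, 0, 1, spreading work more evenly.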