Single-version locking scheduler

Proving the single-version locking scheme correct is trivial: the scheduler is a 2PL scheduler.

Multi-version pessimistic (locking) scheduler

The multi-version pessimistic (locking) scheme is in fact an MV2PL scheduler. Holding a certify (commit) lock on a data item in MV2PL is exactly like having the NoMoreReadLocks bit set in the latest version of the data item in our implementation (see Section 4.2.1). Section 5.5.2 of [WV02] describes MV2PL in detail and proves that it admits only 1SR multi-version histories.

Multi-version optimistic scheduler

Let us now prove that the multi-version optimistic scheduler admits only 1SR multi-version histories. We use the notation and theorems from Section 5.2 of [BHG87]. The multi-version optimistic scheduler behaves like an MVTO scheduler, with the changes described below.

Let transaction Tx be a committed transaction with a begin timestamp of TxBegin and an end timestamp of TxEnd.

Property 1: Timestamps are assigned in monotonically increasing order, and each transaction has a unique begin and end timestamp such that TxBegin < TxEnd.

Property 2: A given version is valid for the interval specified by its begin and end timestamps. There is a total order << of the versions of a given datum, determined by the timestamp order of their non-overlapping validity intervals.

Property 3: Transaction Tx reads the latest committed version as of TxRead (where TxBegin <= TxRead < TxEnd) and validates (that is, repeats) the read of the latest committed version as of TxEnd. The transaction fails if the two reads return different versions.

Property 4: An update or delete of a version V first checks the visibility of V, and checking the visibility of V is equivalent to reading V. Therefore, a write is always preceded by a read: if transaction Tx writes Vnew, then Tx has first read Vold, where Vold << Vnew.
Moreover, there exists no version V such that Vold << V << Vnew; otherwise Tx would never have committed: it would have failed during the Active phase when changing the end timestamp of Vold (see Section 3.1, paragraph "Update version").[1]

[1] Notice that all our concurrency control algorithms enforce a stronger property: they use the first-writer-wins rule to abort transactions that participate in a write-write conflict before it is determined whether the first writer will commit. The more relaxed property described here is sufficient to prove correctness.
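The read-and-validate protocol of Properties 2 and 3 can be illustrated with a minimal sketch. This is not the paper's implementation; the names Version, read_visible, and validate_reads, and the representation of validity intervals as [begin, end) half-open ranges, are assumptions made for illustration.

```python
# Illustrative sketch of the multi-version optimistic read/validate
# protocol (Properties 2 and 3). All names are hypothetical.

INF = float("inf")

class Version:
    """One committed version of a datum, valid for timestamps in [begin, end)."""
    def __init__(self, value, begin, end=INF):
        self.value = value
        self.begin = begin  # commit timestamp of the creating transaction
        self.end = end      # commit timestamp of the replacing transaction

def read_visible(versions, ts):
    """Return the version valid at timestamp ts. By Property 2 the
    validity intervals are non-overlapping, so at most one matches."""
    for v in versions:
        if v.begin <= ts < v.end:
            return v
    return None

def validate_reads(read_set, tx_end):
    """Property 3: repeat every read as of the end timestamp; the
    transaction fails if any read now returns a different version."""
    for versions, seen in read_set:
        if read_visible(versions, tx_end) is not seen:
            return False
    return True
```

For example, a transaction that read version "a" (valid for [10, 20)) at timestamp 15 validates successfully with an end timestamp of 18, but fails with an end timestamp of 25, since a newer version has become the latest committed one in the meantime.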
Materialized views can provide massive improvements in query processing time, especially for aggregation queries over large tables. To realize this potential, the query optimizer must know how and when to exploit materialized views. This paper presents a fast and scalable algorithm for determining whether part or all of a query can be computed from materialized views and describes how it can be incorporated in transformation-based optimizers. The current version handles views composed of selections, joins and a final group-by. Optimization remains fully cost based, that is, a single "best" rewrite is not selected by heuristic rules but multiple rewrites are generated and the optimizer chooses the best alternative in the normal way. Experimental results based on an implementation in Microsoft SQL Server show outstanding performance and scalability. Optimization time increases slowly with the number of views but remains low even up to a thousand.

Keywords: Materialized views, view matching, query optimization.
Query processing can be sped up by keeping frequently accessed users' views materialized. However, the need to access base relations in response to queries can be avoided only if the materialized view is adequately maintained. We propose a method in which all database updates to base relations are first filtered to remove from consideration those that cannot possibly affect the view. The conditions given for the detection of updates of this type, called irrelevant updates, are necessary and sufficient and are independent of the database state. For the remaining database updates, a differential algorithm can be applied to re-evaluate the view expression. The algorithm proposed exploits the knowledge provided by both the view definition expression and the database update operations.
Abstract. A new file organisation called dynamic hashing is presented. The organisation is based on normal hashing, but the allocated storage space can easily be increased and decreased without reorganising the file, according to the number of records actually stored in the file. The expected storage utilisation is analysed and is shown to be approximately 69 % all the time. Algorithms for inserting and deleting a record are presented and analysed. Retrieval of a record is fast, requiring only one access to secondary storage. There are no overflow records. The proposed scheme necessitates maintenance of a relatively small index structured as a forest of binary trees or slightly modified binary tries. The expected size of the index is analysed and a compact representation of the index is suggested.
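The growth mechanism described in the abstract can be sketched as follows: fixed-capacity buckets sit at the leaves of a binary trie, and a full bucket splits on the next bit of the record's hash value, so the file grows without any full reorganisation. This is a toy illustration under assumed names (Node, insert, lookup) and an artificially small bucket capacity, not the paper's algorithm or its index representation.

```python
# Toy sketch of dynamic hashing: buckets indexed by a binary trie;
# a full bucket splits on the next hash bit. Names and the capacity
# are illustrative assumptions.

CAPACITY = 2  # records per bucket (small, to force splits in a demo)

class Node:
    def __init__(self):
        self.records = []   # leaf: records stored in this bucket
        self.zero = None    # internal: subtrie for next hash bit 0
        self.one = None     # internal: subtrie for next hash bit 1

    def is_leaf(self):
        return self.zero is None

def bit(key, depth):
    # depth-th bit of the key's hash (a real scheme uses a proper hash)
    return (hash(key) >> depth) & 1

def insert(node, key, depth=0):
    while not node.is_leaf():
        node = node.one if bit(key, depth) else node.zero
        depth += 1
    node.records.append(key)
    if len(node.records) > CAPACITY:        # bucket overflow: split it,
        node.zero, node.one = Node(), Node()
        for k in node.records:              # redistributing on the next bit
            insert(node.one if bit(k, depth) else node.zero, k, depth + 1)
        node.records = []

def lookup(node, key, depth=0):
    while not node.is_leaf():
        node = node.one if bit(key, depth) else node.zero
        depth += 1
    return key in node.records
```

Retrieval follows hash bits from the root to a single leaf bucket, which corresponds to the abstract's claim of one secondary-storage access once the (small, memory-resident) trie index has been traversed.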