On Labor Day weekend, the highway patrol sets up spot-checks at random points on the freeways with the intention of deterring a large fraction of motorists from driving incorrectly. We explore a very similar idea in the context of program checking to ascertain with minimal overhead that a program output is reasonably correct. Our model of spot-checking requires that the spot-checker run asymptotically much faster than the combined length of the input and output. We then show that the spot-checking model can be applied to problems in a wide range of areas, including problems regarding graphs, sets, and algebra. In particular, we present spot-checkers for sorting, convex hull, element distinctness, set containment, set equality, total orders, and correctness of group and field operations. All of our spot-checkers are very simple to state and rely on testing that the input and/or output have certain simple properties that depend on very few bits.

Our sorting spot-checker runs in O(log n) time to check the correctness of the output produced by a sorting algorithm on an input consisting of n numbers. We also show that there is an O(1) spot-checker to check a program that determines whether a given relation is close to a total order. We present a technique for testing in almost linear time whether a given operation is close to an associative cancellative operation. In this extended abstract we show the checker under the assumption that the input operation is cancellative and leave the general case for the full version of the paper. In contrast, [RaS96] show that quadratic time is necessary and sufficient to test that a given cancellative operation is associative. This method yields a very efficient tester (over small domains) for all functions satisfying associative functional equations [Acz66]. We also extend this result to test in almost linear time whether the given operation is close to a group operation.
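A spot-check of sortedness in the spirit described above can be sketched as follows: pick a random index i and binary-search for the element at that index (breaking ties by index); on a sorted list the search lands exactly on i, while on a list far from sorted a constant fraction of indices fail, so a constant number of O(log n) trials suffices. This is a minimal illustrative sketch, not the paper's exact construction; the function name and trial count are assumptions.

```python
import random


def passes_spot_check(a, trials=20, rng=random.Random(0)):
    """Sortedness spot-check (illustrative sketch).

    For a random index i, binary-search for a[i], comparing pairs
    (value, index) lexicographically so duplicates are handled.
    On a sorted list every trial lands on i; on a list far from
    sorted, many indices cannot be reached and a trial fails.
    Each trial costs O(log n) comparisons.
    """
    n = len(a)
    if n == 0:
        return True
    for _ in range(trials):
        i = rng.randrange(n)
        lo, hi = 0, n - 1
        found = False
        while lo <= hi:
            mid = (lo + hi) // 2
            if mid == i:
                found = True
                break
            # Go right iff (a[mid], mid) < (a[i], i) lexicographically.
            if a[mid] < a[i] or (a[mid] == a[i] and mid < i):
                lo = mid + 1
            else:
                hi = mid - 1
        if not found:
            return False
    return True
```

On a sorted input the lexicographic comparison makes the search converge to index i itself, so the checker always accepts correct output; rejection of far-from-sorted lists holds only with high probability, which is exactly the spot-checking trade-off.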
In one proposed use of digital watermarks, the owner of a document D sells slightly different documents D_1, D_2, ... to each buyer; if a buyer posts his/her document D_i to the web, the owner can identify the source of the leak. More general attacks are possible, however, in which k buyers create some composite document D*; the goal of the owner is then to identify at least one of the conspirators. We show, for a reasonable model of digital watermarks, fundamental limits on their efficacy against collusive attacks. In particular, if the effective document length is n, then at most O(√(n/ln n)) adversaries can defeat any watermarking scheme. Our attack is, in the theoretical model, oblivious to the watermarking scheme being used; in practice, it uses very little information about the watermarking scheme. Thus, using a proprietary system seems to give only a very weak defense.
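The collusive-attack idea can be illustrated with a toy majority attack on bit-vector documents: the k colluders compare their copies position by position and emit the majority bit, breaking ties at random, so the composite leaks little about any single colluder's fingerprint. This is a generic sketch of a collusion attack, not the specific oblivious attack analyzed in the paper.

```python
import random


def collusion_majority_attack(copies, rng=random.Random(1)):
    """Toy collusion attack on bit-vector documents (illustrative).

    Each of the k colluders holds a differently marked copy of the
    same document. At every position, output the majority bit across
    the copies, breaking exact ties uniformly at random. Positions
    where all copies agree pass through unchanged, so the composite
    blends the individual fingerprints together.
    """
    n = len(copies[0])
    k = len(copies)
    composite = []
    for j in range(n):
        ones = sum(c[j] for c in copies)
        if 2 * ones > k:
            composite.append(1)
        elif 2 * ones < k:
            composite.append(0)
        else:
            composite.append(rng.randrange(2))  # tie: flip a coin
    return composite
```

Note the attack never consults the watermarking scheme; it only compares the colluders' copies, which is the sense in which such attacks can be oblivious.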
We show how to determine whether the edit distance between two given strings is small in sublinear time. Specifically, we present a test which, given two n-character strings A and B, runs in time o(n) and with high probability returns "CLOSE" if their edit distance is O(n^α), and "FAR" if their edit distance is Ω(n), where α is a fixed parameter less than 1. Our algorithm for testing the edit distance works by recursively subdividing the strings A and B into smaller substrings and looking for pairs of substrings in A, B with small edit distance. To do this, we query both strings at random places using a special technique for economizing on the samples; the technique does not pick the samples independently and provides better query and overall complexity. As a result, our test runs in sublinear time for any fixed α < 1. Our algorithm thus provides a trade-off between accuracy and efficiency that is particularly useful when the input data is very large. We also show a lower bound of Ω(n^(α/2)) on the query complexity of every algorithm that distinguishes pairs of strings with edit distance at most n^α from those with edit distance at least n/6.
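The recursive test repeatedly needs the exact edit distance of pairs of short sampled substrings, where a quadratic-time subroutine is affordable. As a point of reference, that subroutine is the classic Levenshtein dynamic program (a generic sketch, not code from the paper):

```python
def edit_distance(a: str, b: str) -> int:
    """Standard Levenshtein dynamic program, O(|a|*|b|) time and
    O(|b|) space. Each cell holds the edit distance between a
    prefix of a and a prefix of b.
    """
    prev = list(range(len(b) + 1))  # distances from "" to prefixes of b
    for i, ca in enumerate(a, 1):
        cur = [i]  # distance from a[:i] to ""
        for j, cb in enumerate(b, 1):
            cur.append(min(
                prev[j] + 1,                 # delete ca
                cur[j - 1] + 1,              # insert cb
                prev[j - 1] + (ca != cb),    # match or substitute
            ))
        prev = cur
    return prev[-1]
```

Running this on full n-character strings would cost Θ(n²); the point of the sublinear test is to invoke it only on short substring pairs chosen by correlated sampling.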