Suppose a number of hospitals in a geographic area want to learn how their own heart-surgery unit is doing compared with the others in terms of mortality rates, subsequent complications, or any other quality metric. Similarly, a number of small businesses might want to use their recent point-of-sale data to cooperatively forecast future demand and thus make more informed decisions about inventory, capacity, employment, etc. These are simple examples of cooperative benchmarking and forecasting, respectively, that would benefit all participants as well as the public at large: they would make it possible for participants to avail themselves of more precise and reliable data collected from many sources, to assess their own local performance in comparison to global trends, and to avoid many of the inefficiencies that currently arise from having less information available for decision-making. Yet, in spite of all these advantages, cooperative benchmarking and forecasting typically do not take place, because participants are unwilling to share their information with others. Their reluctance is quite rational, stemming from fears of embarrassment, lawsuits, weakening their negotiating position (e.g., in case of over-capacity), revealing corporate performance and strategies, and so on. The development and deployment of private benchmarking and forecasting technologies would allow such collaborations to take place without revealing any participant's data to the others, reaping the benefits of collaboration while avoiding the drawbacks. Moreover, this kind of technology would empower smaller organizations, which could then cooperatively base their decisions on a much broader information base, in a way that is today restricted to only the largest corporations.
This paper is a step towards this goal, as it gives protocols for forecasting and benchmarking that reveal to the participants the desired answers yet do not reveal to any participant any other participant's private data. We consider several forecasting methods, including linear regression and time series techniques such as moving average and exponential smoothing. A novel aspect of this work, one that further distinguishes it from previous work in secure multi-party computation, is that it involves floating-point arithmetic; in particular, it provides protocols for securely and efficiently performing division.
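To make the forecasting techniques named above concrete, the following sketch shows the plain (non-private) computations for moving-average and simple exponential-smoothing forecasts; the secure protocols of the paper compute such quantities without revealing individual inputs. The function names and toy data here are our own, for illustration only.

```python
def moving_average_forecast(series, window):
    """Forecast the next value as the mean of the last `window` observations."""
    recent = series[-window:]
    return sum(recent) / len(recent)

def exponential_smoothing_forecast(series, alpha):
    """Simple exponential smoothing: each new observation is blended into
    the running forecast with weight alpha (0 < alpha < 1)."""
    forecast = series[0]
    for x in series[1:]:
        forecast = alpha * x + (1 - alpha) * forecast
    return forecast

# Toy demand history from one participant (illustrative numbers).
demand = [12.0, 15.0, 14.0, 16.0, 18.0]
print(moving_average_forecast(demand, 3))          # mean of last 3 values: 16.0
print(exponential_smoothing_forecast(demand, 0.5)) # 16.4375
```

In the cooperative setting, each term of these sums would come from a different participant, which is why division (e.g., by the window length) must itself be performed securely.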
Abstract: With the growing threat of abuse of network resources, it becomes increasingly important to be able to detect malformed packets on a network and estimate the damage they can cause. Carefully constructed, certain types of packets can cause a victim host to crash, while other packets may be sent only to gather necessary information about hosts and networks and can be viewed as a prelude to attack. In this paper, we collect and analyze all of the IP and TCP packets seen on a network that either violate existing standards or should not appear in modern internets. Our goal is to determine what these suspicious packets mean and evaluate what proportion of such packets can cause actual damage. Thus, we divide the suspicious packets obtained during our experiments into several categories depending on the severity of their consequences, including indirect consequences as a result of information gathering, and show the results. The traces analyzed were gathered at Ohio University's main Internet link, providing a massive amount of statistical data.
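One kind of standards-violating packet described above can be recognized from TCP header flags alone. The sketch below is a hedged illustration of such a check, not the paper's actual taxonomy: it labels flag combinations that are forbidden by the TCP standard or are characteristic of well-known scanning tools.

```python
# TCP flag bit masks as defined in the TCP header (RFC 793).
FIN, SYN, RST, PSH, ACK, URG = 0x01, 0x02, 0x04, 0x08, 0x10, 0x20

def classify_tcp_flags(flags):
    """Label clearly invalid or suspicious TCP flag combinations.
    Category names here are illustrative."""
    if flags == 0:
        return "null scan"   # no flags set: never legitimate traffic
    if flags & SYN and flags & FIN:
        return "SYN+FIN"     # contradictory: open and close at once
    if flags == FIN | PSH | URG:
        return "xmas scan"   # classic information-gathering probe
    return "ok"

print(classify_tcp_flags(SYN))        # a normal connection request: "ok"
print(classify_tcp_flags(SYN | FIN))  # "SYN+FIN"
```

A packet matching the first three cases would fall into the "information gathering" or "standards violation" categories of analysis like that described in the abstract.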
Purpose: To compare the reproduction accuracy of methods for determining the centric relation (CR) of the jaws using digital research methods. The methods studied were bilateral manipulation by P.E. Dawson, a frontal deprogrammer, a leaf gauge, and an intraoral device for recording the Gothic arch angle. Methods: To determine the reproduction accuracy of the centric relation of the jaws, we examined 5 patients with intact dentition (Angle Class I) in a prosthetic dentistry clinic. For each method, 20 registrations of the centric jaw relation were carried out by one operator, with 30-minute breaks between registrations. A total of 400 CR recording operations were carried out (400 records of CR). To study the reproducibility of the CR determination methods, 200 recorded mandible positions were analyzed by means of an analog-to-digital method (a macro kit: Canon 650D, Canon 60 mm macro IS USM f/2.8, Canon macro ring MR-14 EX, and the computer program Adobe Photoshop) to assess the first occlusal contact obtained in the CR of the jaws, while the other 200 were analyzed by means of a digital method (the computer program Avantis for 3D modeling, a Prime laboratory 3D scanner (DOF), and a Trios intraoral scanner (3Shape)) to assess the spatial position of the mandible in the CR. Statistical analysis was carried out using STATISTICA-10; in all statistical procedures, the critical significance level p was set at 0.05. Results: In the analysis of the data with the Avantis 3D program, the reproducibility of the mandible position in the CR reached 0.119 ± 0.012 mm for the frontal deprogrammer, 0.225 ± 0.028 mm (p ≤ 0.05) for bilateral manipulation by P.E. Dawson, 0.207 ± 0.02 mm (p ≤ 0.05) for the leaf gauge, and 0.120 ± 0.013 mm (p ≤ 0.05) for the intraoral device for recording the Gothic arch angle. The analog-to-digital method showed the same tendency in the reproduction of the mandible position.
Conclusions: The digital analysis performed with the Avantis 3D program showed, with high confidence, that the maximum reproducibility of the CR position was achieved using the frontal deprogrammer of our own design and the device for recording the Gothic arch angle.
When each customer must be given portable access rights to a subset of documents from a large universe of n available documents, it is often the case that the space available for representing a customer's access rights is limited to much less than n, say to no more than m bits. This is the case when, e.g., inexpensive limited-capacity cards are used to store the access rights to huge multimedia document databases. How does one represent subsets of a huge set of n elements when only m bits are available and m is much smaller than n? We use an approach reminiscent of Bloom filters, by assigning to each document a subset of the m bits: if a document is in a customer's subset, then we set the corresponding bits to 1 on the customer's card. This guarantees that each customer gets the documents he paid for, but it also gives him access to documents he did not pay for ("false positives"). We want to do so in a manner that minimizes the expected total false positives under various deterministic and probabilistic models: in the former model we assume k customers whose respective subsets are known a priori, whereas in the latter we assume (more realistically) that each document has a probability of being included in a customer's subset. We cannot use randomly assigned bits for each document (in the way Bloom filters do); rather, we need to exploit the a priori knowledge (deterministic or probabilistic) we are given in each model in order to better assign a subset of the m available bits to each of the n documents. We analyze and give efficient schemes for this problem.
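The mechanism described above can be sketched in a few lines. This is a minimal illustration of the card scheme, with an arbitrary toy bit assignment chosen by us; choosing the per-document bit subsets well is exactly what the paper's schemes address.

```python
def make_card(purchased, doc_bits, m):
    """Build the m-bit card for a set of purchased documents:
    the union of the bit subsets assigned to those documents."""
    card = [0] * m
    for doc in purchased:
        for b in doc_bits[doc]:
            card[b] = 1
    return card

def has_access(card, doc, doc_bits):
    """A card grants access iff every bit assigned to the document is set."""
    return all(card[b] == 1 for b in doc_bits[doc])

# Toy assignment: m = 4 bits, 4 documents (illustrative only).
doc_bits = {"d0": {0, 1}, "d1": {1, 2}, "d2": {2, 3}, "d3": {0, 3}}
card = make_card({"d0", "d2"}, doc_bits, 4)  # sets all four bits

print(has_access(card, "d0", doc_bits))  # True: purchased
print(has_access(card, "d2", doc_bits))  # True: purchased
print(has_access(card, "d1", doc_bits))  # True: a false positive
print(has_access(card, "d3", doc_bits))  # True: a false positive
```

The example shows both properties at once: every purchased document is accessible, but the union of bit subsets can cover other documents' subsets, producing false positives that the assignment of bits to documents should minimize.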
Abstract. We present and analyze portable access control mechanisms for large data repositories, in which the customized access policies are stored on a portable device (e.g., a smart card). While there are significant privacy-preservation advantages to the use of smart cards anonymously created and bought in public places (stores, libraries, etc.), a major difficulty is that, for huge data repositories and limited-capacity portable storage devices, it is not possible to represent every possible access configuration on the card. For a customer whose card is supposed to contain a subset S of documents, access to all of S must be allowed. In some situations a small enough number of "false positives" (accesses to non-S documents) is acceptable to the server, and the challenge then is to minimize the number of false positives implicit in any given card. We describe and analyze schemes for both unstructured and structured collections of documents. For these schemes, we give fast algorithms for efficiently using the limited space available on the card. In our model the customer does not know which documents correspond to false positives, the probability of a randomly chosen document being a false positive is small, and information about the false positives bound to one card is useless for any other card, even if both of them permit access to the same set of documents S.