In this paper we use Devanagari-script OCR for recognition. The handwritten data set was created by us, and for the printed characters we used the ISM font. Features are extracted with a gradient- and curvature-based method.
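The abstract does not give the exact formulation of the feature extractor, but a gradient- and curvature-based scheme can be sketched as follows: compute first-order Sobel derivatives, bin the gradient directions into a magnitude-weighted histogram, and measure isophote curvature from second-order derivatives. The function name, histogram size, and feature layout below are illustrative assumptions, not the paper's method.

```python
import numpy as np
from scipy import ndimage

def gradient_curvature_features(img, bins=8):
    """Illustrative gradient/curvature features for a grayscale
    character image (2-D float array). Not the paper's exact method."""
    # First-order derivatives (Sobel operators)
    gx = ndimage.sobel(img, axis=1)
    gy = ndimage.sobel(img, axis=0)
    magnitude = np.hypot(gx, gy)
    direction = np.arctan2(gy, gx)                  # angles in [-pi, pi]

    # Gradient feature: orientation histogram weighted by magnitude
    hist, _ = np.histogram(direction, bins=bins,
                           range=(-np.pi, np.pi), weights=magnitude)

    # Isophote curvature from second-order derivatives:
    # kappa = (gx^2*gyy - 2*gx*gy*gxy + gy^2*gxx) / (gx^2 + gy^2)^(3/2)
    gxx = ndimage.sobel(gx, axis=1)
    gyy = ndimage.sobel(gy, axis=0)
    gxy = ndimage.sobel(gx, axis=0)
    denom = np.maximum(magnitude ** 3, 1e-8)        # avoid divide-by-zero
    curvature = (gx**2 * gyy - 2*gx*gy*gxy + gy**2 * gxx) / denom

    # Concatenate the histogram with simple curvature statistics
    return np.concatenate([hist, [curvature.mean(), curvature.std()]])
```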
Image quality assessment (IQA) plays an important role in many image processing applications and remains an active area of research. A great deal of effort has been made in recent years to develop objective image quality metrics that correlate well with perceived quality as measured by subjective methods. Image quality can be measured in two ways: subjectively and objectively. In subjective assessment, quality ratings from human viewers are aggregated into a mean opinion score (MOS), whereas in objective assessment quality is estimated by algorithms. Subjective assessment is concerned with how an image is perceived by a viewer, who gives his or her opinion on the image and judges the quality of the multimedia content. Because human eyes extract structural information from the viewing field, the human visual system is highly adapted for this purpose.
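To make the subjective/objective contrast concrete, here is a minimal sketch of two classic objective metrics: PSNR, a pixel-wise error measure, and a single-window simplification of SSIM, which captures the structural information the passage attributes to the human visual system. Note that canonical SSIM is computed over local windows; the global version below is an assumption made for brevity.

```python
import numpy as np

def psnr(ref, dist, max_val=255.0):
    """Peak signal-to-noise ratio: a simple pixel-wise objective metric.
    Higher is better, but it correlates only loosely with MOS."""
    mse = np.mean((ref.astype(np.float64) - dist.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(max_val**2 / mse)

def global_ssim(ref, dist, max_val=255.0):
    """Single-window SSIM: compares luminance, contrast, and structure,
    reflecting the structural information the HVS is adapted to extract."""
    x = ref.astype(np.float64)
    y = dist.astype(np.float64)
    c1, c2 = (0.01 * max_val) ** 2, (0.03 * max_val) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2*mx*my + c1) * (2*cov + c2)) / \
           ((mx**2 + my**2 + c1) * (vx + vy + c2))
```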
In today's virtual and widely distributed networks, sensitive data are regularly handed over from a distributor to supposedly trusted third parties, so the security and durability of the service must be safeguarded according to the demands of users. A data distributor has given sensitive data to a set of supposedly trusted agents (third parties). Some of the data are leaked and found in an unauthorized place (e.g., on the web or on somebody's laptop). The distributor must assess the likelihood that the leaked data came from one or more agents, as opposed to having been independently gathered by other means. We propose data allocation strategies (across the agents) that improve the probability of identifying leakages. These methods do not rely on alterations of the released data (e.g., watermarks). In some cases, we can also inject "realistic but fake" data records to further improve our chances of detecting leakage and identifying the guilty party. The idea of modifying the data itself to detect leakage is not a new approach. Generally, sensitive data are leaked by agents, and the specific agent responsible for the leaked data should be detected at an early stage; tracing the data from the distributor to the agents is therefore mandatory. This project presents a data leakage detection system that uses various allocation strategies and assesses the likelihood that the leaked data came from one or more agents. For secure transactions, access control policies that allow only authorized users to reach sensitive data prevent leakage by sharing information only with trusted parties; in addition, fake records added to the data set improve the probability of identifying leakages in the system, as sketched after the keyword list below. Finally, it was decided to implement this mechanism on a cloud server.
General Terms: data allocation strategies, leakage model, data privacy
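A minimal sketch of the fake-record injection and guilt-assessment idea follows. The allocation helper, the parameter p (the probability that a leaked record was obtained independently of any agent), and the guilt formula are assumptions based on the commonly cited guilt model for this problem, not this project's exact algorithms.

```python
def allocate_with_fakes(agents, fakes_per_agent=2):
    """Give each agent its requested records plus a few unique fake
    records; a fake appearing in a leak implicates exactly one agent.
    (Illustrative only; real strategies also minimize overlap.)"""
    allocation = {}
    for agent, request in agents.items():
        fakes = {f"FAKE-{agent}-{i}" for i in range(fakes_per_agent)}
        allocation[agent] = set(request) | fakes
    return allocation

def guilt_probability(leaked, allocation, p=0.2):
    """Simplified guilt model: each leaked record was obtained
    independently with probability p, otherwise leaked by one of the
    agents holding it, with equal likelihood. An agent's guilt is the
    probability that it leaked at least one record."""
    probs = {}
    for agent, records in allocation.items():
        innocent = 1.0
        for t in leaked & records:
            holders = sum(1 for r in allocation.values() if t in r)
            innocent *= 1.0 - (1.0 - p) / holders
        probs[agent] = 1.0 - innocent
    return probs

# Example with hypothetical data:
# alloc = allocate_with_fakes({"A": ["r1", "r2"], "B": ["r2", "r3"]})
# guilt_probability({"r2", "FAKE-A-0"}, alloc)  # the fake implicates A
```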