Preserving the utility of published datasets while simultaneously providing provable privacy guarantees is a well-known challenge. On the one hand, context-free privacy solutions, such as differential privacy, provide strong privacy guarantees, but often lead to a significant reduction in utility. On the other hand, context-aware privacy solutions, such as information theoretic privacy, achieve an improved privacy-utility tradeoff, but assume that the data holder has access to dataset statistics. We circumvent these limitations by introducing a novel context-aware privacy framework called generative adversarial privacy (GAP). GAP leverages recent advancements in generative adversarial networks (GANs) to allow the data holder to learn privatization schemes from the dataset itself. Under GAP, learning the privacy mechanism is formulated as a constrained minimax game between two players: a privatizer that sanitizes the dataset in a way that limits the risk of inference attacks on the individuals' private variables, and an adversary that tries to infer the private variables from the sanitized dataset. To evaluate GAP's performance, we investigate two simple (yet canonical) statistical dataset models: (a) the binary data model; and (b) the binary Gaussian mixture model. For both models, we derive game-theoretically optimal minimax privacy mechanisms, and show that the privacy mechanisms learned from data (in a generative adversarial fashion) match the theoretically optimal ones. This demonstrates that our framework can be easily applied in practice, even in the absence of dataset statistics.
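The binary data model admits a minimal numerical illustration of the privatizer-adversary game. The sketch below is an assumption for illustration, not the paper's learned GAN mechanism: it uses a simple randomized-response privatizer (flip each released bit with probability p) and a Bayes-style adversary, and shows the adversary's inference accuracy falling toward chance as the flip probability approaches 0.5. The names `privatize` and `adversary_accuracy`, and all parameters, are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed binary data model: private bit X ~ Bernoulli(0.5); the raw record
# to be released equals X itself.
n = 20000
X = rng.integers(0, 2, n)

def privatize(X, p, rng):
    """Randomized-response privatizer: flip each bit independently with prob. p."""
    flips = rng.random(X.shape) < p
    return np.where(flips, 1 - X, X)

def adversary_accuracy(X, Z):
    """Accuracy of the best deterministic adversary: guess Z or its complement,
    whichever agrees with X more often."""
    agree = np.mean(X == Z)
    return max(agree, 1 - agree)

# Sweep the flip probability: inference accuracy decays from 1.0 toward 0.5.
for p in [0.0, 0.1, 0.3, 0.5]:
    Z = privatize(X, p, rng)
    print(p, round(adversary_accuracy(X, Z), 3))
```

In the full GAP framework the privatizer is not restricted to bit-flipping: it is a learned mechanism trained adversarially against the inference network under a distortion constraint; randomized response is simply the natural parametric family for this binary toy case.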
Correctly segmenting medical images with intensity inhomogeneity is usually difficult, yet it is of great significance for understanding medical images. Local image intensity features play a vital role in accurately segmenting such images, so it is crucial to capture these local intensity features. The main idea of this paper is to construct an efficient similarity-based level set model that synthesizes similarity theory, curve evolution, and the level set method. First, a local statistical function is modeled as Gaussian distributions at different scales to estimate the bias field, from which an approximation of the true image can be recovered for more accurate medical image segmentation. Second, a new potential function is constructed to maintain the stability of the curve evolution, in particular the signed-distance profile in the neighborhood of the zero level set, which plays an important role in correct segmentation. Third, an adaptive convergence criterion is proposed to accelerate the curve evolution. Finally, experiments on artificial and medical images, together with comparisons against current well-known region-based models, are discussed in detail. Our extensive experimental results demonstrate that the proposed method can correctly segment medical images with intensity inhomogeneity in a few iterations and is also less sensitive to the initial contour.
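To make the region-based curve-evolution idea concrete, the sketch below is a minimal, simplified illustration, not the paper's model: it runs a curvature-free Chan-Vese-style level set evolution on a synthetic image with a mild intensity bias. It omits the paper's key ingredients (the multi-scale Gaussian bias-field estimation, the stabilizing potential function, and the adaptive convergence criterion); the image, the function `chan_vese_step`, and all parameters are assumptions for illustration.

```python
import numpy as np

# Synthetic test image (assumed): dark background with a brighter 24x24
# square object, plus a mild linear intensity bias across columns.
H, W = 64, 64
img = np.full((H, W), 0.2)
img[20:44, 20:44] = 0.8
img += np.linspace(0, 0.1, W)[None, :]   # mild inhomogeneity

# Level set function: positive inside an initial circle, negative outside.
yy, xx = np.mgrid[0:H, 0:W]
phi = 15.0 - np.sqrt((yy - H / 2) ** 2 + (xx - W / 2) ** 2)

def chan_vese_step(phi, img, dt=0.5):
    """One explicit step of a curvature-free Chan-Vese-style evolution:
    grow phi where a pixel is closer to the inside mean, shrink it where
    the pixel is closer to the outside mean."""
    inside = phi > 0
    c1 = img[inside].mean()    # mean intensity inside the contour
    c2 = img[~inside].mean()   # mean intensity outside the contour
    force = -(img - c1) ** 2 + (img - c2) ** 2
    return phi + dt * force

for _ in range(200):
    phi = chan_vese_step(phi, img)

seg = phi > 0
print("segmented pixels:", seg.sum())
```

Note that without a stabilizing potential, phi here drifts away from a signed-distance profile as it evolves; maintaining that profile near the zero level set is exactly what the paper's new potential function is designed to do.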
The first Agriculture-Vision Challenge aims to encourage research into novel and effective algorithms for agricultural pattern recognition from aerial images, especially for the semantic segmentation task associated with our challenge dataset. Around 57 participating teams from various countries competed to achieve state-of-the-art performance in aerial agricultural semantic segmentation. The challenge used the Agriculture-Vision Challenge Dataset, which comprises 21,061 aerial multi-spectral farmland images. This paper provides a summary of notable methods and results in the challenge. Our submission server and leaderboard will remain open to researchers interested in this challenge dataset and task. For more information on our dataset and other related efforts in Agriculture-Vision, please visit our CVPR 2020 workshop and challenge website: https://www.agriculture-vision.com.