Access to large volumes of so-called whole-slide images, i.e., high-resolution scans of complete pathological slides, has become a cornerstone of the development of novel artificial intelligence methods in digital pathology, and it also has a broader impact on medical research and education/training. However, a risk-analysis-based methodology for sharing such imaging data according to the principle "as open as possible and as closed as necessary" is still lacking. In this article, we develop a model for privacy risk analysis of whole-slide images that focuses primarily on identity disclosure attacks, as these are the most important from a regulatory perspective. We develop a mathematical model for risk assessment and design a taxonomy of whole-slide images with respect to privacy risks. Based on this risk assessment model and taxonomy, we design a series of experiments to demonstrate the risks on real-world imaging data. Finally, we derive guidelines for risk assessment and recommendations for data sharing based on the identified risks, in order to promote low-risk sharing of such data.