We describe novel composite nanoparticles consisting of a gold-silver nanocage core and a mesoporous silica shell functionalized with the photodynamic sensitizer Yb-2,4-dimethoxyhematoporphyrin (Yb-HP). In addition to the long-wavelength plasmon resonance near 750-800 nm, the composite particles exhibited a 400-nm absorbance peak and two fluorescence peaks, near 580 and 630 nm, corresponding to bound Yb-HP. The fabricated nanocomposites generated singlet oxygen under 630-nm excitation and produced heat under laser irradiation at the plasmon resonance wavelength (750-800 nm). In particular, we observed enhanced killing of HeLa cells incubated with the nanocomposites and irradiated with 630-nm light. A further advantage of the fabricated conjugates was an IR-luminescence band (900-1060 nm), originating from the Yb(3+) ions of bound Yb-HP and located in the long-wavelength part of the tissue transparency window. This modality was used to monitor the accumulation and biodistribution of the composite particles in mice bearing Ehrlich carcinoma tumors, in a comparative study with intravenously injected free Yb-HP molecules. Thus, these multifunctional nanocomposites appear to be an attractive theranostic platform, combining simultaneous IR-luminescence diagnostics and photodynamic therapy owing to Yb-HP with plasmonic photothermal therapy owing to the Au-Ag nanocages.
We introduce Opacus, a free, open-source PyTorch library for training deep learning models with differential privacy (hosted at opacus.ai). Opacus is designed for simplicity, flexibility, and speed. It provides a simple and user-friendly API, and enables machine learning practitioners to make a training pipeline private by adding as few as two lines to their code. It supports a wide variety of layers, including multi-head attention, convolution, LSTM, and embedding, right out of the box, and it also provides the means for supporting other user-defined layers. Opacus computes batched per-sample gradients, providing better efficiency compared to the traditional "micro-batch" approach. In this paper we present Opacus, detail the principles that drove its implementation and its unique features, and compare its performance against other frameworks for differential privacy in ML.
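To make the per-sample-gradient idea concrete, here is a minimal NumPy sketch of the clip-and-noise step at the heart of DP-SGD, the algorithm Opacus implements. This is an illustration only, not the Opacus API: the function name `dp_sgd_step` and its parameters are assumptions of this sketch, and real Opacus computes the per-sample gradients itself inside PyTorch.

```python
import numpy as np

def dp_sgd_step(per_sample_grads, max_grad_norm, noise_multiplier, rng):
    """Illustrative DP-SGD noisy-gradient step (not the Opacus API):
    clip each example's gradient to max_grad_norm in L2 norm, sum,
    add Gaussian noise scaled to the clipping bound, and average."""
    clipped = []
    for g in per_sample_grads:
        norm = np.linalg.norm(g)
        # Scale down any gradient whose L2 norm exceeds the clipping bound.
        clipped.append(g * min(1.0, max_grad_norm / (norm + 1e-12)))
    summed = np.sum(clipped, axis=0)
    noise = rng.normal(0.0, noise_multiplier * max_grad_norm, size=summed.shape)
    return (summed + noise) / len(per_sample_grads)

rng = np.random.default_rng(0)
grads = rng.normal(size=(8, 4))  # 8 per-sample gradients over 4 parameters
noisy_grad = dp_sgd_step(grads, max_grad_norm=1.0, noise_multiplier=1.1, rng=rng)
```

Computing the per-sample gradients in one batched pass, rather than looping over "micro-batches" of size one as above, is the efficiency gain the abstract refers to.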
We consider the privacy-preserving machine learning (ML) setting where the trained model must satisfy differential privacy (DP) with respect to the labels of the training examples. We propose two novel approaches based on, respectively, the Laplace mechanism and the PATE framework, and demonstrate their effectiveness on standard benchmarks. While recent work by Ghazi et al. proposed label-DP schemes based on a randomized response mechanism, we argue that additive Laplace noise coupled with Bayesian inference (ALIBI) is a better fit for typical ML tasks. Moreover, we show how to achieve very strong privacy levels in some regimes with our adaptation of the PATE framework, which builds on recent advances in semi-supervised learning. We complement the theoretical analysis of our algorithms' privacy guarantees with an empirical evaluation of their memorization properties. Our evaluation suggests that comparing different algorithms according to their provable DP guarantees can be misleading and favor a less private algorithm with a tighter analysis. Code implementing the algorithms and the memorization attacks is available at https://github.com/facebookresearch/label_dp_antipodes under an MIT license.
Reconstruction attacks allow an adversary to regenerate data samples of the training set using access to only a trained model. It has recently been shown that simple heuristics can reconstruct data samples from language models, making this threat scenario an important aspect of model release. Differential privacy is a known defense against such attacks, but it is often used with a relatively large privacy budget (ε ≥ 8), which does not translate to meaningful guarantees. In this paper we show that, for the same mechanism, we can derive privacy guarantees for reconstruction attacks that are better than the traditional ones from the literature. In particular, we show that larger privacy budgets do not protect against membership inference, but can still protect against extraction of rare secrets. We show experimentally that our guarantees hold against various language models, including GPT-2 fine-tuned on Wikitext-103.
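A back-of-envelope calculation shows why a budget that is vacuous for membership inference can still protect rare secrets. We use only the simple multiplicative ε-DP bound, success ≤ min(1, e^ε · base rate), which is a loose baseline rather than the paper's refined reconstruction analysis.

```python
import math

def dp_success_bound(epsilon, base_rate):
    """Simple multiplicative epsilon-DP bound on an adversary's
    success probability: success <= min(1, exp(epsilon) * base_rate)."""
    return min(1.0, math.exp(epsilon) * base_rate)

# Membership inference: prior success rate 1/2, so at epsilon = 8
# the bound saturates at 1.0 and says nothing.
membership = dp_success_bound(8.0, 0.5)   # -> 1.0 (vacuous)

# Extracting one specific rare secret: prior success rate 1e-6,
# so even e^8 ~ 2981 leaves the bound at roughly 0.003.
secret = dp_success_bound(8.0, 1e-6)
```

The asymmetry comes entirely from the base rate: membership inference starts from a coin flip, while guessing a rare secret starts from near zero, so the same e^ε factor yields a vacuous bound in one case and a strong one in the other.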
scite is a Brooklyn-based organization that helps researchers better discover and understand research articles through Smart Citations: citations that display the context of the citation and describe whether the article provides supporting or contrasting evidence. scite is used by students and researchers from around the world and is funded in part by the National Science Foundation and the National Institute on Drug Abuse of the National Institutes of Health.