Membership Inference Attacks (MIA) try to determine whether or not a target sample was used in training a target model (Shokri et al., 2017; Yeom et al., 2018). These attacks can be seen as privacy risk analysis tools (Murakonda and Shokri, 2020; Nasr et al., 2021; Kandpal et al., 2022), which help reveal how much the model has memorized individual samples in its training set, and what risk individual users face (Nasr et al., 2019; Long et al., 2017; Salem et al., 2018; Ye et al., 2021; Carlini et al., 2021a). A group of these attacks rely on the behavior of shadow models (models trained on data similar to the target model's training data, in order to mimic the target model) to determine the membership of given samples (Jayaraman et al., 2021; Shokri et al., 2017). In the shadow-model training procedure, the adversary trains a batch of models $m_1, m_2, \ldots, m_k$ as shadow models, with data from the target user.
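A minimal sketch of this shadow-model procedure, in the spirit of Shokri et al. (2017), is given below. It assumes the adversary holds auxiliary data from the same distribution as the target model's training data; the synthetic data, the choice of classifiers, and the single-feature attack model (confidence on the true class) are illustrative simplifications, not the exact construction in any of the cited papers.

```python
# Sketch of shadow-model-based membership inference (after Shokri et al., 2017).
# Assumption: the adversary has auxiliary data from the target distribution;
# here it is synthetic so the example is self-contained.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Auxiliary dataset the adversary controls (hypothetical stand-in).
X_aux = rng.normal(size=(2000, 10))
y_aux = (X_aux[:, 0] + rng.normal(scale=0.5, size=2000) > 0).astype(int)

k = 8          # number of shadow models m_1, ..., m_k
n_train = 500  # size of each shadow model's training set

features, labels = [], []  # attack-model data: (confidence, member?)
for _ in range(k):
    idx = rng.permutation(len(X_aux))
    in_idx, out_idx = idx[:n_train], idx[n_train:2 * n_train]

    # Train one shadow model on the "in" split to mimic the target model.
    shadow = MLPClassifier(hidden_layer_sizes=(32,), max_iter=300,
                           random_state=0)
    shadow.fit(X_aux[in_idx], y_aux[in_idx])

    # Record its confidence on members (in) and non-members (out).
    for subset, is_member in ((in_idx, 1), (out_idx, 0)):
        probs = shadow.predict_proba(X_aux[subset])
        conf = probs[np.arange(len(subset)), y_aux[subset]]  # true-class prob
        features.append(conf.reshape(-1, 1))
        labels.append(np.full(len(subset), is_member))

# Attack model: predicts membership from the model's confidence.
attack = LogisticRegression().fit(np.vstack(features),
                                  np.concatenate(labels))

# At attack time, query the target model f on a candidate sample (x, y):
#   conf = f.predict_proba([x])[0, y]
#   score = attack.predict_proba([[conf]])[0, 1]   # membership score
```

In practice the attack features are richer (e.g., the full confidence vector per class, or calibrated losses as in later likelihood-ratio variants), but the core loop is the same: shadow models supply labeled in/out examples of model behavior that the attack model learns to separate.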