“…However, even ML models trained on raw data can indirectly reveal sensitive information [17,50,16,49], RNNs in particular [58]. To protect ML models against such threats, the literature offers several privacy-preserving ML alternatives under the state-of-the-art DP guarantee [22,23], e.g., input [19,31,24,29,10,9], gradient [4,33,51,60,48], and objective perturbation [18].…”
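To make the gradient-perturbation idea concrete, here is a minimal DP-SGD-style sketch: each per-example gradient is clipped to a fixed L2 norm and Gaussian noise is added before the update. All names and parameter values (`clip_norm`, `noise_multiplier`) are illustrative assumptions, not taken from the cited works.

```python
import numpy as np

def perturb_gradient(grad, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """Gradient perturbation sketch: clip the gradient's L2 norm to
    clip_norm, then add Gaussian noise with standard deviation
    noise_multiplier * clip_norm (hypothetical parameter names)."""
    rng = np.random.default_rng() if rng is None else rng
    norm = np.linalg.norm(grad)
    # Norm-based clipping: scale down only if the norm exceeds clip_norm.
    clipped = grad / max(1.0, norm / clip_norm)
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=grad.shape)
    return clipped + noise

# Example: a gradient of norm 5 is clipped to norm 1 before noising.
g = np.array([3.0, 4.0])
noisy = perturb_gradient(g, clip_norm=1.0, noise_multiplier=1.1)
```

Input perturbation instead noises the training data itself, and objective perturbation adds a random linear term to the loss; the clipping step here is what bounds each example's influence so the noise scale can be calibrated to a DP guarantee.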