Federated learning (FL) is an emerging paradigm for training machine learning models using possibly private data available at edge devices. The distributed operation of FL gives rise to challenges that are not encountered in centralized machine learning, including the need to preserve the privacy of the local datasets, and the communication load due to the repeated exchange of updated models. These challenges are often tackled individually via techniques that induce some distortion on the updated models, e.g., local differential privacy (LDP) mechanisms and lossy compression. In this work we propose a method coined joint privacy enhancement and quantization (JoPEQ), which jointly implements lossy compression and privacy enhancement in FL settings. In particular, JoPEQ utilizes vector quantization based on random lattices, a universal compression technique whose byproduct distortion is statistically equivalent to additive noise. This distortion is leveraged to enhance privacy by augmenting the model updates with dedicated multivariate privacy-preserving noise. We show that JoPEQ simultaneously quantizes data according to a required bit rate while holding a desired privacy level, without notably affecting the utility of the learned model. This is shown via analytical LDP guarantees, the derivation of distortion and convergence bounds, and numerical studies. Finally, we empirically assert that JoPEQ demolishes common attacks known to exploit privacy leakage.

publicly available [24]. LDP can be boosted by corrupting the model updates with privacy-preserving noise (PPN) [25], via splitting/shuffling [26] or dimension selection [27], and by exploiting the noise induced when communicating over a shared wireless channel [28], [29]. Prior works also studied the trade-offs between user privacy, utility, and transmission rate, providing utility [24] and convergence [30] bounds. Several recent studies consider both challenges of compression and privacy in FL.
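To illustrate why the distortion of dithered lattice quantization is statistically equivalent to additive noise, the following minimal sketch uses the simplest lattice, the scaled integers (JoPEQ itself employs multivariate lattices); the function name, step size, and shared random seed are illustrative assumptions, not the paper's implementation. With a dither shared between encoder and decoder (e.g., via a common seed), the end-to-end error is uniform over the basic lattice cell and independent of the input.

```python
import numpy as np

rng = np.random.default_rng(0)


def dithered_quantize(x, step=0.5):
    """Subtractively dithered scalar (lattice Z) quantizer.

    The dither is assumed to be shared between encoder and decoder,
    so the decoder can subtract it. The resulting error x_hat - x is
    uniform on [-step/2, step/2) and independent of x.
    """
    dither = rng.uniform(-step / 2, step / 2, size=x.shape)
    q = step * np.round((x + dither) / step)  # nearest lattice point
    return q - dither                          # decoder removes the dither


x = rng.standard_normal(10_000)
err = dithered_quantize(x) - x
# Empirically: zero-mean error with variance step^2 / 12, like additive
# uniform noise, regardless of the distribution of x.
print(err.mean(), err.var())
```

The additive-noise equivalence is what lets JoPEQ fold part of the privacy-preserving noise budget into the quantization distortion itself.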
The works [31], [32] quantize the local gradient with a differentially private 1-bit compressor. That is, the probability with which each gradient coordinate is encoded into one of two possible dictionary words is designed to satisfy the Gaussian mechanism; thus the communication burden is reduced while differential privacy (DP) is simultaneously guaranteed. However, these methods utilize fixed 1-bit quantizers, and cannot be adapted to the available communication budget. In [33], the authors combine privacy and compression by converting the distortion induced by random lattice coding into a Gaussian noise that satisfies DP. To do so, they perturb the gradient by Gaussian noise prior to quantization, and the overall procedure then satisfies DP by the composition theorem. The above works consider DP enhancements, providing users with privacy guarantees against untruthful adversaries, but fail to do so against a potentially untrusted FL third-party server, as can be guaranteed by LDP. The recent work [34] proposed a compression method that holds LDP. This scheme, referr...
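The flavor of private 1-bit compression described above can be sketched as follows. This is a hypothetical toy mechanism, not the exact construction of [31], [32] (which use the Gaussian mechanism): each clipped coordinate is stochastically rounded to one of two dictionary words, a randomized-response flip provides a per-coordinate eps-LDP guarantee, and the decoder rescales to keep the estimate unbiased. All names and parameter choices are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)


def private_one_bit(g, clip=1.0, eps=1.0):
    """Toy eps-LDP 1-bit compressor (illustrative, not from [31], [32]).

    1) Clip each coordinate to [-clip, clip].
    2) Stochastically round to +/-clip so the sent bit is unbiased.
    3) Flip the bit with probability 1 / (1 + e^eps), which makes each
       coordinate's output eps-LDP; the decoder then debiases.
    """
    g = np.clip(g, -clip, clip)
    p_plus = (g + clip) / (2 * clip)             # P(send +clip), unbiased
    bit = np.where(rng.random(g.shape) < p_plus, 1.0, -1.0)
    q = np.exp(eps) / (1 + np.exp(eps))          # keep probability
    keep = rng.random(g.shape) < q
    bit = np.where(keep, bit, -bit)              # randomized response
    # E[bit] = (2q - 1) * g / clip, so rescale to recover g in expectation.
    return bit * clip / (2 * q - 1)


g = rng.uniform(-1, 1, size=200_000)
est = private_one_bit(g)
print(np.mean(est - g))  # close to 0 in expectation: unbiased estimator
```

Note the inherent limitation the text points out: the rate is fixed at one bit per coordinate, so such a scheme cannot exploit a larger available communication budget, whereas lattice-based schemes like JoPEQ can trade rate against distortion.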