Modeling clouds is hard. As discussed in detail in Morrison, van Lier-Walqui, Fridlind, et al. (2020), the two biggest obstacles are (a) the sheer number of cloud and precipitation particles to predict and (b) the lack of fundamental understanding of the microphysical processes involved. With quintillions of droplets in a typical cloud, it is infeasible to predict the evolution of each droplet and its environment. In addition, unlike other subgrid physical processes such as radiation, turbulence, and convection, there is no fundamental governing equation or benchmark model to inform the modeling of microphysics at larger scales. Clouds are thus modeled only from a macroscopic perspective, with their parameterizations determined empirically. Aside from the more recently developed Lagrangian superdroplet method (e.g., Shima et al., 2009), the two most well-established types of microphysics scheme are bulk schemes and bin schemes. Bin schemes track the number of droplets within discrete size or mass ranges (i.e., they are size- or mass-resolved), allowing the particle size distribution (PSD) to evolve freely, but are therefore computationally expensive (A. P. Khain et al., 2015). On the other hand, bulk schemes predict quantities proportional to moments of the PSD (i.e., moment-resolved), typically mass and number concentration (e.g.,