Nevertheless, if a large number of field datasets, e.g., 4D seismic data (Lorentzen et al., 2019; Luo et al., 2017; Soares et al., 2020), are assimilated into a large reservoir model, then constructing a correspondingly large tapering matrix for the update of each reservoir model can become computationally challenging. In this regard, we expect that several strategies can be adopted to reduce memory consumption, including: (1) sparse model/data representations (Canchumuni et al., 2019; Lorentzen et al., 2019; Luo et al., 2017; Soares et al., 2020) to reduce the sizes of the reservoir model and/or the observation data; (2) projection of the observation data onto the ensemble subspace, with the projected data then used as the effective observations (Luo et al., 2019; Luo and Cruz, 2022), as sketched below; (3) local analysis, in which each update involves only a small group of model variables and observation data points (Chen and Oliver, 2017; Soares et al., 2021). In future work, we will test some of these strategies in relevant data assimilation problems.
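To make strategy (2) concrete, the following is a minimal sketch in Python/NumPy of projecting high-dimensional observations onto the subspace spanned by an ensemble of simulated observations. The function name, the energy-based truncation criterion, and all variable names are illustrative assumptions of ours, not the implementation of Luo et al. (2019) or Luo and Cruz (2022).

import numpy as np

def project_to_ensemble_subspace(d_obs, D_pred, energy=0.99):
    """Project observations onto the ensemble subspace (illustrative sketch).

    d_obs  : (N_d,) observed data vector; N_d may be very large.
    D_pred : (N_d, N_e) ensemble of simulated observations.
    energy : fraction of singular-value energy to retain (an assumed
             truncation criterion, not taken from the cited papers).

    Returns the projected observations, the projected ensemble, and the
    projection basis; the projected dimension is at most N_e - 1, so
    downstream updates act on N_e-sized rather than N_d-sized quantities.
    """
    # Center the simulated-data ensemble around its mean.
    D_centered = D_pred - D_pred.mean(axis=1, keepdims=True)

    # Thin SVD: the columns of U span the ensemble subspace in data space.
    U, s, _ = np.linalg.svd(D_centered, full_matrices=False)

    # Keep the leading singular directions capturing most of the energy.
    k = int(np.searchsorted(np.cumsum(s**2) / np.sum(s**2), energy)) + 1
    U_k = U[:, :k]                       # (N_d, k) projection basis

    # Effective observations live in a k-dimensional space, k <= N_e.
    d_proj = U_k.T @ d_obs               # (k,)
    D_proj = U_k.T @ D_pred              # (k, N_e)
    return d_proj, D_proj, U_k

# Hypothetical usage with synthetic numbers: many data points, few members.
rng = np.random.default_rng(0)
N_d, N_e = 100_000, 100
D_pred = rng.normal(size=(N_d, N_e))
d_obs = rng.normal(size=N_d)
d_proj, D_proj, _ = project_to_ensemble_subspace(d_obs, D_pred)
print(d_proj.shape, D_proj.shape)        # (k,) and (k, N_e), with k <= N_e

The point of the sketch is that any tapering or localization matrix subsequently needed for the update then has dimensions set by the small projected data space rather than by the full observation space, which is the memory saving this strategy targets.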