Permeation grouting is widely used in grouting engineering because of its low grouting pressure and minor disturbance to the stratum. However, owing to the complex properties of the sand layer and the slurry, accurately predicting the groutability of a sand layer remains difficult. In this paper, the groutability of sand layers is studied with a self-designed permeation grouting test device, considering sand particle size, relative density of the sand layer, slurry water-cement ratio, and clay content. The factors influencing groutability are analyzed, their effects on the grouting of the sand layer are evaluated, and a new approach for predicting groutability is proposed. The results show that sand particle size and slurry water-cement ratio are positively correlated with the groutability of the sand layer, whereas relative density and clay content are negatively correlated with it. The proposed alternative empirical formula estimates the groutability of sand layers with higher accuracy and can serve as a reference for engineering practice.
Due to the limited scale and quality of video-text training corpora, most vision-language foundation models employ image-text datasets for pretraining and primarily focus on modeling visually semantic representations while disregarding temporal semantic representations and correlations. To address this issue, we propose COSA, a COncatenated SAmple pretrained vision-language foundation model. COSA jointly models visual contents and event-level temporal cues using only image-text corpora. We achieve this by sequentially concatenating multiple image-text pairs as inputs for pretraining. This transformation effectively converts existing image-text corpora into a pseudo long-form video-paragraph corpus, enabling richer scene transformations and explicit event-description correspondence. Extensive experiments demonstrate that COSA consistently improves performance across a broad range of downstream tasks, including long-form/short-form video-text tasks and image-text tasks such as retrieval, captioning, and question answering. Notably, COSA achieves state-of-the-art results on various competitive benchmarks. Code and model are released at https://github.com/TXH-mercury/COSA.
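To make the concatenation idea concrete, the sketch below shows one plausible way to turn sampled image-text pairs into a pseudo video-paragraph sample on the fly. It is a minimal illustration under stated assumptions: the function name `make_pseudo_video_paragraph`, the toy corpus, and the sampling strategy are hypothetical and are not taken from the released COSA code (see the repository linked above for the actual implementation).

```python
import random
from typing import List, Tuple

import torch


def make_pseudo_video_paragraph(
    corpus: List[Tuple[torch.Tensor, str]],
    num_frames: int = 4,
    sep: str = " ",
) -> Tuple[torch.Tensor, str]:
    """Concatenate several image-text pairs into one pseudo video-paragraph sample.

    Each sampled image acts as one "frame" and its caption as one "event"
    sentence, so frame i corresponds to sentence i of the paragraph.
    (Illustrative sketch only; not the official COSA pipeline.)
    """
    # Randomly draw `num_frames` independent image-text pairs from the corpus.
    pairs = random.sample(corpus, k=num_frames)

    # Stack the images along a new time axis: (num_frames, C, H, W).
    frames = torch.stack([image for image, _ in pairs], dim=0)

    # Join the captions in the same order, preserving event-description correspondence.
    paragraph = sep.join(caption for _, caption in pairs)
    return frames, paragraph


if __name__ == "__main__":
    # Toy corpus: random 3x224x224 "images" with dummy captions.
    toy_corpus = [(torch.rand(3, 224, 224), f"caption {i}") for i in range(16)]
    video, paragraph = make_pseudo_video_paragraph(toy_corpus, num_frames=4)
    print(video.shape)   # torch.Size([4, 3, 224, 224])
    print(paragraph)     # e.g. "caption 7 caption 2 caption 11 caption 5"
```

Such a sample can then be fed to a video-text style model exactly as a short clip with an aligned paragraph would be, which is the sense in which an image-text corpus becomes a pseudo long-form video-paragraph corpus.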