Long-term changes in water temperature, DO, pH, TN, NH4-N, TP, and CODMn were examined at eight sampling stations along the Liujiang River. Water quality parameters showed considerable spatial and temporal variability. Annual averages of these parameters were 22.1°C, 7.8 mg/L, 7.58, 1.35 mg/L, 0.27 mg/L, 0.06 mg/L, and 1.7 mg/L, respectively. An increasing trend in TN/TP and a decreasing trend in pH were observed in all parts of the Liujiang. Pollution levels were generally higher in the lower Liujiang than in the upper and middle reaches of the river owing to the impact of urban sewage. All indicators met level III water quality standards except TN, which suggests that the control of nitrogen emissions should be strengthened. Relatively high N/P ratios in the Liujiang point to potential phosphorus limitation of phytoplankton. The average concentration of chlorophyll-a was 1.2 μg/L in 2014. The TLI index indicated that the eutrophication state of the Liujiang was mesotrophic, although the downstream water was polluted by nutrient inputs from agricultural and urban sources. The water quality of the river remains good in comparison with other major rivers of the world, providing a basis for urban development and river protection in Liuzhou City.
Recent work on extracting relations from text has achieved excellent performance. However, most existing methods pay little attention to efficiency, so quickly extracting relations from massive or streaming text data in realistic scenarios remains challenging. The main efficiency bottleneck is that these methods use a Transformer-based pre-trained language model for encoding, which heavily affects both training and inference speed. To address this issue, we propose a fast relation extraction model (FastRE) based on a convolutional encoder and an improved cascade binary tagging framework. Compared to previous work, FastRE employs several innovations to improve efficiency while maintaining promising performance. Concretely, FastRE adopts a novel convolutional encoder architecture that combines dilated convolution, gated units, and residual connections, which significantly reduces the computation cost of training and inference while maintaining satisfactory performance. Moreover, to improve the cascade binary tagging framework, FastRE first introduces a type-relation mapping mechanism that accelerates tagging and alleviates relation redundancy, and then uses a position-dependent adaptive thresholding strategy to obtain higher tagging accuracy and better model generalization. Experimental results demonstrate that FastRE strikes a good balance between efficiency and performance: it achieves 3-10× faster training, 7-15× faster inference, and 1/100 of the parameters compared to state-of-the-art models, while its performance remains competitive. Our code is available at https://github.com/seukgcode/FastRE.
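The encoder ingredients named in the abstract can be illustrated with a minimal sketch: a 1-D dilated convolution whose output is modulated by a sigmoid gate and then added back to the input through a residual connection. All names, kernel sizes, and the "same" zero-padding here are illustrative assumptions for a single channel, not FastRE's actual implementation (see the linked repository for that).

```python
# Illustrative single-channel sketch of a dilated, gated, residual
# convolution block -- NOT the FastRE code, just the general technique.
import math

def dilated_conv1d(x, weights, dilation):
    """Apply a 1-D convolution with the given dilation to a list of
    floats, zero-padded so the output has the same length as the input."""
    k = len(weights)
    center = (k - 1) // 2
    out = []
    for i in range(len(x)):
        s = 0.0
        for j, w in enumerate(weights):
            idx = i + (j - center) * dilation  # dilated receptive field
            if 0 <= idx < len(x):              # implicit zero padding
                s += w * x[idx]
        out.append(s)
    return out

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

def gated_residual_block(x, w_main, w_gate, dilation):
    """y = x + conv(x) * sigmoid(conv_gate(x)): the gated unit scales the
    convolution output elementwise, and the residual adds the input back."""
    h = dilated_conv1d(x, w_main, dilation)
    g = dilated_conv1d(x, w_gate, dilation)
    return [xi + hi * sigmoid(gi) for xi, hi, gi in zip(x, h, g)]
```

Stacking such blocks with growing dilation rates widens the receptive field exponentially without the quadratic cost of self-attention, which is the intuition behind the reported speedups.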
Recent work on continual relation learning has achieved remarkable progress. However, most existing methods focus only on tackling catastrophic forgetting to improve performance in the existing setup, whereas continually learning relations in the real world must overcome many other challenges. One is that data may arrive in an online streaming fashion, with data distributions changing gradually and without distinct task boundaries. Another is that noisy labels are inevitable in real-world data, as relation samples may be contaminated by label inconsistencies or labeled with distant supervision. In this work, we therefore propose a novel continual relation learning framework that simultaneously addresses both the online and the noisy relation learning challenges. Our framework contains three key modules: (i) a sample-separated online purifying module that divides the online data stream into clean and noisy samples, (ii) a self-supervised online learning module that circumvents the inferior training signals caused by noisy data, and (iii) a semi-supervised offline fine-tuning module that ensures the participation of both clean and noisy samples. Experimental results on FewRel, TACRED, and NYT-H with real-world noise demonstrate that our framework greatly outperforms combinations of state-of-the-art online continual learning and noisy label learning methods.
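The first module's clean/noisy split can be sketched with the common "small-loss" heuristic from the noisy-label literature, which assumes that samples the model fits with low loss are more likely to be correctly labeled. This is a generic stand-in, not necessarily the purifying criterion the abstract's framework actually uses; `clean_ratio` is a hypothetical parameter.

```python
# Generic small-loss separation of a streaming batch into clean and noisy
# subsets -- an illustrative stand-in, not the paper's purifying module.

def separate_batch(samples, losses, clean_ratio=0.7):
    """Treat the clean_ratio fraction of samples with the smallest
    per-sample loss as clean; return (clean, noisy) lists."""
    order = sorted(range(len(samples)), key=lambda i: losses[i])
    n_clean = int(len(samples) * clean_ratio)
    clean = [samples[i] for i in order[:n_clean]]
    noisy = [samples[i] for i in order[n_clean:]]
    return clean, noisy
```

In a framework like the one described, the clean subset would feed supervised updates while the noisy subset is routed to the self-supervised and semi-supervised modules rather than being discarded.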