Biodegradation of synthetic wastewater containing phenol in an upflow anaerobic packed bed (UAPB) reactor was studied in this work. The reactor was operated at a hydraulic retention time (HRT) of 24 h under mesophilic (30±1°C) conditions. The startup operation was conducted for 150 days, split into four phases in which the phenol concentration was increased stepwise: 100, 400, 700, and 1000 mg/l in phases 1, 2, 3, and 4, respectively. In phase 1, the reactor reached steady-state conditions on the 8th day, with a phenol removal efficiency of 96.8% and a biogas production rate of 1.42 l/d. When the influent phenol concentration was increased in phase 2, a slight decrease in phenol removal efficiency was observed. Similar trends were observed in phases 3 and 4 of startup: the high phenol concentration caused a sudden decrease in removal efficiency and biogas production, after which the surviving microorganisms gradually adapted and acclimated to the high phenol concentrations. In phases 3 and 4, the phenol removal efficiencies at steady state were 98.4% and 98%, respectively. The maximum biogas production, 3.57 l/d, was observed on day 130 at a phenol concentration of 1000 mg/l.
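As a worked illustration of the removal-efficiency figures above, the sketch below computes percent phenol removal as (C_in − C_out)/C_in × 100 for each startup phase. The influent concentrations come from the abstract; the effluent values are back-calculated from the reported steady-state efficiencies for phases 1, 3, and 4, and the phase 2 value is a hypothetical placeholder, since the study reports only a "slight decrease" there.

```python
# Minimal sketch: phenol removal efficiency per startup phase.
# Influent concentrations are from the abstract; effluent values are
# back-calculated from the reported efficiencies (phase 2 is a placeholder).

def removal_efficiency(c_in: float, c_out: float) -> float:
    """Percent removal: (C_in - C_out) / C_in * 100."""
    return (c_in - c_out) / c_in * 100.0

# Phase -> influent phenol concentration (mg/l), from the abstract.
influent = {1: 100.0, 2: 400.0, 3: 700.0, 4: 1000.0}

# Steady-state effluent concentrations (mg/l); phase 2 is hypothetical.
effluent = {1: 3.2, 2: 16.0, 3: 11.2, 4: 20.0}

for phase, c_in in influent.items():
    eff = removal_efficiency(c_in, effluent[phase])
    print(f"Phase {phase}: {eff:.1f}% phenol removal")
```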
State-of-the-art pretrained NLP models contain hundreds of millions to trillions of parameters. Adapters provide a parameter-efficient alternative to full finetuning, in which only lightweight neural network layers are finetuned on top of the pretrained weights. Adapter layers are initialized randomly. However, existing work uses the same adapter architecture, i.e., the same adapter layer on top of each layer of the pretrained model, for every dataset, regardless of the properties of the dataset or the amount of available training data. In this work, we introduce adaptable adapters, which contain (1) activation functions that are learned for different layers and different input data, and (2) a learnable switch that selects and uses only the beneficial adapter layers. We show that adaptable adapters achieve on-par performance with the standard adapter architecture while using a considerably smaller number of adapter layers. In addition, we show that the adapter architecture selected by adaptable adapters transfers well across different data settings and similar tasks. We propose using adaptable adapters to design efficient and effective adapter architectures. The resulting adapters (a) contain about 50% of the learnable parameters of the standard adapter, and are therefore more efficient at training and inference and require less storage space, and (b) achieve considerably higher performance in low-data settings.
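The PyTorch sketch below illustrates the two components the abstract describes: a per-layer learnable activation and a learnable switch that gates each adapter layer. All names here (LearnableActivation, SwitchedAdapter, the identity/ReLU mix) are illustrative assumptions, not the authors' implementation.

```python
# A minimal sketch of (1) a learnable activation and (2) a learnable switch
# gating a bottleneck adapter layer. Not the authors' implementation.

import torch
import torch.nn as nn


class LearnableActivation(nn.Module):
    """Toy learnable activation: a learned mix of identity and ReLU.

    A stand-in for richer parametric activations; the mixing weight is
    learned separately for each adapter layer.
    """

    def __init__(self):
        super().__init__()
        self.alpha = nn.Parameter(torch.tensor(0.5))

    def forward(self, x):
        a = torch.sigmoid(self.alpha)
        return a * torch.relu(x) + (1.0 - a) * x


class SwitchedAdapter(nn.Module):
    """Bottleneck adapter whose contribution is gated by a learnable switch."""

    def __init__(self, hidden_size: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck)
        self.act = LearnableActivation()
        self.up = nn.Linear(bottleneck, hidden_size)
        # Switch logit; sigmoid(switch) acts as the probability of keeping
        # this adapter. After training, layers whose switch stays near zero
        # can be dropped entirely, which is where the savings come from.
        self.switch = nn.Parameter(torch.zeros(1))

    def forward(self, hidden_states):
        gate = torch.sigmoid(self.switch)
        adapted = self.up(self.act(self.down(hidden_states)))
        # Residual connection: a closed switch (gate -> 0) reduces to identity.
        return hidden_states + gate * adapted


# Usage: one switched adapter on top of each pretrained transformer layer.
adapter = SwitchedAdapter(hidden_size=768)
out = adapter(torch.randn(2, 16, 768))
print(out.shape)  # torch.Size([2, 16, 768])
```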
Neural abstractive summarization models are prone to generating summaries that are factually inconsistent with their source documents. Previous work has introduced the task of recognizing such factual inconsistency as a downstream application of natural language inference (NLI). However, state-of-the-art NLI models perform poorly in this context due to their inability to generalize to the target task. In this work, we show that NLI models can be effective for this task when the training data is augmented with high-quality, task-oriented examples. We introduce Falsesum, a data generation pipeline that leverages a controllable text generation model to perturb human-annotated summaries, introducing varying types of factual inconsistencies. Unlike previously introduced document-level NLI datasets, our generated dataset contains examples that are diverse and inconsistent yet plausible. We show that models trained on a Falsesum-augmented NLI dataset improve state-of-the-art performance across four benchmarks for detecting factual inconsistency in summarization.
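To make the data-augmentation idea concrete, the sketch below assembles document-level NLI pairs from a (document, summary) pair: the document is the premise, the original summary becomes an entailed hypothesis, and a perturbed summary becomes a non-entailed one. The perturb_summary callable is a hypothetical placeholder for the controllable generation model the abstract describes; this sketch does not implement that model.

```python
# Sketch: turning perturbed summaries into document-level NLI training pairs.
# `perturb_summary` stands in for a controllable generation model that would
# produce a plausible but factually inconsistent rewrite of the summary.

from typing import Callable, Dict, List


def build_nli_examples(
    document: str,
    summary: str,
    perturb_summary: Callable[[str], str],
) -> List[Dict[str, str]]:
    return [
        {"premise": document, "hypothesis": summary, "label": "entailment"},
        {
            "premise": document,
            "hypothesis": perturb_summary(summary),
            "label": "non-entailment",
        },
    ]


# Toy stand-in perturbation: a crude factual edit for illustration only.
examples = build_nli_examples(
    document="The reactor ran for 150 days at 30 C.",
    summary="The reactor ran for 150 days.",
    perturb_summary=lambda s: s.replace("150", "15"),
)
for ex in examples:
    print(ex["label"], "->", ex["hypothesis"])
```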