Background: The recalcitrant cell walls of microalgae may limit their digestibility for bioenergy production. Considering that cellulose contributes to the cell wall recalcitrance of the microalga Chlorella vulgaris, this study investigated bioaugmentation with a cellulolytic and hydrogenogenic bacterium, Clostridium thermocellum, at different inoculum ratios as a possible method to improve CH4 and H2 production from microalgae. Results: Methane production increased by 17~24% with the addition of C. thermocellum, as a result of enhanced cell disruption and excess hydrogen production. Furthermore, addition of C. thermocellum enhanced bacterial diversity and abundance, leading to higher fermentation efficiency. A two-step process, adding C. thermocellum first and methanogenic sludge subsequently, could recover both hydrogen and methane, with a 9.4% increase in bioenergy yield compared with the one-step process of simultaneous addition of C. thermocellum and methanogenic sludge. The fluorescence peaks of excitation-emission matrix spectra associated with chlorophyll can serve as biomarkers for algal cell degradation. Conclusions: Bioaugmentation with C. thermocellum improved the degradation of C. vulgaris biomass, producing higher levels of methane and hydrogen. The two-step process, with the methanogenic inoculum added after hydrogen production reached saturation, was found to be an energy-efficient method for hydrogen and methane production.
Retrieving unlabeled videos by textual queries, known as Ad-hoc Video Search (AVS), is a core theme in multimedia data management and retrieval. The success of AVS depends on cross-modal representation learning that encodes both query sentences and videos into common spaces for semantic similarity computation. Inspired by the initial success of a few prior works in combining multiple sentence encoders, this paper takes a step forward by developing a new and general method for effectively exploiting diverse sentence encoders. The novelty of the proposed method, which we term Sentence Encoder Assembly (SEA), is twofold. First, different from prior art that uses only a single common space, SEA supports text-video matching in multiple encoder-specific common spaces. This property prevents the matching from being dominated by a specific encoder that produces an encoding vector much longer than those of other encoders. Second, in order to exploit complementarities among the individual common spaces, we propose multi-space multi-loss learning. As extensive experiments on four benchmarks (MSR-VTT, TRECVID AVS 2016-2019, TGIF and MSVD) show, SEA surpasses the state-of-the-art. In addition, SEA is extremely easy to implement. All this makes SEA an appealing solution for AVS and promising for continuously advancing the task by harvesting new sentence encoders.
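The multi-space matching idea described above can be illustrated with a minimal sketch. This is not the authors' implementation; the function names, projection matrices, and encoder names here are hypothetical, and trained projection layers are replaced by fixed matrices purely to show how per-encoder common spaces keep any single long encoding from dominating the final score.

```python
import numpy as np

def cosine(a, b):
    # Cosine similarity between two vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def sea_score(query_encodings, video_feature, text_proj, video_proj):
    """Score a (query, video) pair across encoder-specific common spaces.

    query_encodings: dict mapping encoder name -> query vector
                     (each encoder may emit a different dimensionality)
    video_feature:   raw video feature vector
    text_proj / video_proj: per-space projection matrices (hypothetical,
                     standing in for learned projection layers)

    Each encoder gets its own common space; the final score is the sum of
    per-space cosine similarities, so every space contributes equally
    regardless of the raw encoding length.
    """
    score = 0.0
    for name, q in query_encodings.items():
        q_common = text_proj[name] @ q            # query into space `name`
        v_common = video_proj[name] @ video_feature  # video into the same space
        score += cosine(q_common, v_common)
    return score

# Toy example: two encoders ("bow", "bert") with different output sizes.
rng = np.random.default_rng(0)
queries = {"bow": rng.normal(size=4), "bert": rng.normal(size=8)}
video = rng.normal(size=6)
text_proj = {"bow": rng.normal(size=(5, 4)), "bert": rng.normal(size=(5, 8))}
video_proj = {"bow": rng.normal(size=(5, 6)), "bert": rng.normal(size=(5, 6))}
score = sea_score(queries, video, text_proj, video_proj)
```

During training, the paper's multi-space multi-loss scheme would attach a ranking loss to each space separately; at retrieval time, videos are ranked by the summed similarity as sketched here.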
As reported by respected evaluation campaigns covering both automated and interactive video search approaches, deep learning has come to dominate the video retrieval area. However, the results are still not satisfactory for many types of search tasks that demand high recall. To investigate this challenging problem, we present two orthogonal task-based performance studies centered around the state-of-the-art W2VV++ query representation learning model for video retrieval. First, an ablation study investigates which components of the model are effective in two types of benchmark tasks focusing on high recall. Second, interactive search scenarios from the Video Browser Showdown are analyzed for two winning prototype systems that implement a selected variant of the model and provide additional querying and visualization components. The analysis of collected logs demonstrates that, even with a state-of-the-art text-based video retrieval model, it remains beneficial to integrate users into the search process for task types where high recall is essential. CCS CONCEPTS • Information systems → Video search.