Sampling methods for benthic meiofauna and macrofauna assessments on the northern Gulf of Mexico continental slope and deep sea were compared. For meiofauna, a core with an inner diameter of 5.1 cm is recommended to yield an appropriate sample size. Meiofauna are concentrated in the uppermost 2 cm of sediment, so sampling the top 3 cm is sufficient; macrofauna penetrate deeper, and the top 10 cm are sufficient for them. Because smaller sieves capture more organisms, mesh sizes of 45 μm for meiofauna and 300 μm for macrofauna are recommended. On average, 88% of meiofauna were extracted in the Ludox fraction relative to the combined total of the Ludox fraction and the sediment pellet. Box corers and multiple corers were compared for estimating macrofauna and meiofauna metrics. Multicorers are recommended for quantitative assessments, whereas box corers are useful for qualitative studies that require capturing more diversity. Box corers underestimate macrofauna abundance by a factor of 2.9. Although the larger box core captures more species and thus yields higher diversity estimates, the gain is modest relative to its 24-times-larger sampled area. The multicorer also preserves the vertical distribution of organisms in the sediment. Because meiofauna are sampled from subcores, there is little difference between the two devices for estimating meiofauna metrics. Replicate multicore samples (i.e., deployments) add little to our understanding of the variance of species richness or abundance; therefore, to describe the spatial footprint of macrofauna community structure, resources should be used to sample more stations over a larger area rather than to take multiple replicates at fewer stations.
Purpose: This study aims to predict popular contributors from text representations of user-generated content in open crowds.
Design/methodology/approach: Three text representation approaches (count vectors, TF-IDF vectors, and word embeddings), combined with supervised machine learning techniques, are used to generate popular contributor predictions.
Findings: The experiments demonstrate that popular contributor prediction is successful: the F1 scores all exceed the baseline model. Popular contributors in open crowds can be predicted from user-generated content.
Research limitations/implications: This research presents new empirical evidence, drawn from text representations of user-generated content, on why some contributors' ideas are more viral than others in open crowds.
Practical implications: This research suggests that companies can learn from popular contributors in ways that help them improve customer agility and better satisfy customers' needs. In addition to boosting customer engagement and triggering discussion, popular contributors' ideas provide insight into the latest trends and customer preferences. The results of this study will benefit marketing strategy, new product development, customer agility, and the management of information systems.
Originality/value: The paper provides new empirical evidence for popular contributor prediction in an innovation crowd using text representation approaches.
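The pipeline described in the abstract (a text representation feeding a supervised classifier) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the example posts, labels, and the choice of TF-IDF with logistic regression are all assumptions made here for demonstration.

```python
# Illustrative sketch: TF-IDF text representation + supervised classifier
# to predict whether a contributor is "popular" (1) or not (0).
# All data below is invented for the example.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy user-generated posts and hypothetical popularity labels.
posts = [
    "new flavor idea with seasonal fruit and less sugar",
    "please bring back the old packaging",
    "limited edition collab with a popular streamer",
    "the app keeps crashing on login",
]
labels = [1, 0, 1, 0]  # 1 = popular contributor, 0 = not

# Vectorize the text and fit the classifier in one pipeline.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(posts, labels)

# Score an unseen post.
print(model.predict(["seasonal fruit collab idea"])[0])
```

In practice the count-vector and word-embedding representations would be swapped in as alternative first stages of the same pipeline, and performance compared via F1 against a baseline, as the abstract describes.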