Next-generation WLANs will support the use of wider channels, known as channel bonding, to achieve higher throughput. However, because each WLAN autonomously selects both its channel center frequency and its channel width, the use of wider channels may also increase competition with other WLANs operating in the same area for the available channel resources. In this paper, we analyse the interactions between a group of neighboring WLANs that use channel bonding and evaluate the impact of those interactions on the achievable throughput. A Continuous Time Markov Network (CTMN) model that captures the coupled dynamics of a group of overlapping WLANs is introduced and validated. The results show that channel bonding can provide significant performance gains even in scenarios with a high density of WLANs, though it may also cause unfair situations in which some WLANs receive most of the transmission opportunities while others starve.
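As a toy illustration of the product-form CTMN behind this kind of analysis, the sketch below computes the long-run airtime share of three hypothetical WLANs with partially overlapping bonded channels. The scenario, the activity ratios, and the product-form assumption are ours, not the paper's.

    from itertools import combinations

    # Hypothetical scenario: three WLANs, each bonding a contiguous range
    # (first, last) of basic channels; B overlaps both A and C.
    channels = {"A": (0, 1), "B": (1, 2), "C": (2, 3)}
    rho = {"A": 1.0, "B": 1.0, "C": 1.0}  # assumed activity ratio lambda/mu per WLAN

    def overlap(w1, w2):
        (a0, a1), (b0, b1) = channels[w1], channels[w2]
        return not (a1 < b0 or b1 < a0)

    # Feasible states: sets of WLANs that can transmit simultaneously.
    wlans = list(channels)
    states = [s for r in range(len(wlans) + 1) for s in combinations(wlans, r)
              if all(not overlap(x, y) for x, y in combinations(s, 2))]

    # Product-form stationary distribution: pi(s) proportional to the product
    # of the activity ratios of the WLANs active in state s.
    def weight(s):
        w = 1.0
        for x in s:
            w *= rho[x]
        return w

    Z = sum(weight(s) for s in states)
    airtime = {w: sum(weight(s) for s in states if w in s) / Z for w in wlans}
    print(airtime)  # {'A': 0.4, 'B': 0.2, 'C': 0.4}

Even in this symmetric toy case the middle WLAN, contending with both neighbours, gets half the airtime of either of them, mirroring the unfairness the abstract describes.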
We provide the first rigorous analysis of proportional fairness in 802.11 WLANs. This analysis corrects prior approximate studies. We show that there exists a unique proportional fair rate allocation and completely characterise the allocation in terms of a new air-time quantity, the total air-time.
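For reference, the standard definition of proportional fairness (generic, not this paper's total-air-time characterisation) can be stated as follows:

    % A feasible rate allocation s* is proportionally fair iff every feasible
    % deviation s decreases the aggregate of proportional rate changes:
    \sum_{i=1}^{n} \frac{s_i - s_i^*}{s_i^*} \le 0
    % Over a convex rate region R this is equivalent to log-utility maximisation:
    s^* = \arg\max_{s \in R} \sum_{i=1}^{n} \log s_i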
The scientific literature peer-review workflow is under strain because of the constant growth in submission volume. One response is to make the initial screening of submissions less time-intensive; reducing screening and review time would save millions of working hours and potentially boost academic productivity. Many platforms have already started to use automated screening tools to detect plagiarism and violations of format requirements. Some tools even attempt to flag the quality of a study or summarise its content to reduce reviewers' load. Recent advances in artificial intelligence (AI) create the potential for (semi-)automated peer-review systems, in which potentially low-quality or controversial studies could be flagged and reviewer-manuscript matching could be performed automatically. However, such approaches raise ethical concerns, particularly around bias and the extent to which AI systems may replicate it. Our main goal in this study is to discuss the potential, pitfalls, and uncertainties of using AI to approximate or assist human decisions in the quality assurance and peer-review process for research outputs. We design an AI tool and train it on 3300 papers from three conferences, together with their review evaluations. We then test the tool's ability to predict the review score of a new, unobserved manuscript using only its textual content. We show that such techniques can reveal correlations between the decision process and other quality proxy measures, uncovering potential biases in the review process. Finally, we discuss the opportunities, but also the potential unintended consequences, of these techniques in terms of algorithmic bias and ethical concerns.
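A minimal sketch of the kind of text-based review-score predictor the study describes, assuming a simple n-gram regression pipeline; the pipeline, data, and variable names are illustrative, not the authors' actual tool:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline

    # texts: manuscript bodies; scores: mean reviewer score per paper (hypothetical)
    texts = ["...paper 1 body...", "...paper 2 body...",
             "...paper 3 body...", "...paper 4 body..."]
    scores = [6.5, 3.0, 7.2, 4.1]

    X_train, X_test, y_train, y_test = train_test_split(
        texts, scores, test_size=0.25, random_state=0)

    model = make_pipeline(
        TfidfVectorizer(max_features=20000, ngram_range=(1, 2), sublinear_tf=True),
        Ridge(alpha=1.0),  # linear regressor over sparse n-gram features
    )
    model.fit(X_train, y_train)
    print(model.predict(X_test))  # predicted review score for unseen manuscripts

Inspecting the largest regression coefficients is one way such a model can surface the correlations with quality proxies (e.g., stylistic features) that the study discusses.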
Dynamic Channel Bonding (DCB) allows for the dynamic selection and use of multiple contiguous basic channels in Wireless Local Area Networks (WLANs). A WLAN operating under DCB can enjoy a larger bandwidth, when available, and therefore achieve a higher throughput. However, the use of larger bandwidths also increases the contention with adjacent WLANs, which can result in longer delays in accessing the channel and consequently, a lower throughput. In this paper, a scenario consisting of multiple WLANs using DCB and operating within carrier-sensing range of one another is considered. An analytical framework for evaluating the performance of such networks is presented. The analysis is carried out using a Markov chain model that characterizes the interactions between adjacent WLANs with overlapping channels. An algorithm is proposed for systematically constructing the Markov chain corresponding to any given scenario. The analytical model is then used to highlight and explain the key properties that differentiate DCB networks of WLANs from those operating on a single shared channel. Furthermore, the analysis is applied to networks of IEEE 802.11ac WLANs operating under DCB, which do not fully comply with some of the simplifying assumptions in our analysis, to show that the analytical model can give accurate results in more realistic scenarios.
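The paper's construction algorithm is not reproduced here, but a plausible sketch of how such a chain can be built systematically is shown below: a state is the set of WLANs currently transmitting, and a breadth-first search from the empty state adds one feasible transmission at a time. The function names, `lam`/`mu` rate parameters, and `overlap` predicate are our assumptions.

    from collections import deque

    def build_ctmc(channels, lam, mu, overlap):
        """Enumerate feasible states and transition rates from the empty state.
        Every subset of a feasible state is itself feasible, so a BFS that only
        ever adds one WLAN at a time visits all feasible states."""
        empty = frozenset()
        states, rates = {empty}, {}
        queue = deque([empty])
        while queue:
            s = queue.popleft()
            for w in s:                      # a transmission ends
                rates[(s, s - {w})] = mu[w]
            for w in channels:               # a transmission starts
                if w not in s and all(not overlap(w, a) for a in s):
                    t = s | {w}
                    rates[(s, t)] = lam[w]
                    if t not in states:
                        states.add(t)
                        queue.append(t)
        return states, rates

From `rates` one can assemble the infinitesimal generator Q and solve pi Q = 0 for the stationary distribution, from which per-WLAN throughput follows.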
There is growing evidence that search engines produce results that are socially biased, reinforcing a view of the world that aligns with prevalent social stereotypes. One means to promote greater transparency of search algorithms, which are typically complex and proprietary, is to raise user awareness of biased result sets. However, to date, little is known about how users perceive bias in search results, and the degree to which their perceptions differ and/or might be predicted from user attributes. One particular area of search that has recently gained attention, and which forms the focus of this study, is image retrieval and gender bias. We conduct a controlled experiment via crowdsourcing, with participants recruited from three countries, to measure the extent to which workers perceive a given image result set to be subjective or objective. Demographic information about the workers, along with measures of sexism, is gathered and analysed to investigate whether (gender) biases in the image search results can be detected. Amongst other findings, the results confirm that sexist people are less likely to detect and report gender biases in image search results.
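An illustrative version of the statistical test such a study calls for, regressing whether a worker reported bias on their sexism score while controlling for gender; the data and variable names are hypothetical, not the authors' dataset:

    import pandas as pd
    import statsmodels.formula.api as smf

    # hypothetical per-worker records
    df = pd.DataFrame({
        "reported_bias": [1, 0, 1, 0, 0, 1, 1, 0],
        "sexism_score":  [1.2, 4.5, 2.0, 3.8, 4.1, 3.6, 2.2, 1.9],
        "gender":        ["f", "m", "f", "m", "f", "m", "f", "m"],
    })

    model = smf.logit("reported_bias ~ sexism_score + C(gender)", data=df).fit(disp=0)
    print(model.summary())  # a negative sexism_score coefficient matches the finding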
Crowdsourcing is a popular technique to collect large amounts of human-generated labels, such as the relevance judgments used to create information retrieval (IR) evaluation collections. Previous research has shown that collecting high-quality labels from a crowdsourcing platform can be challenging. Existing quality assurance techniques focus on answer aggregation or on the use of gold questions, where ground-truth data allow the quality of responses to be checked. In this paper, we present qualitative and quantitative results revealing how different crowd workers adopt different work strategies to complete relevance judgment tasks efficiently, and the consequent impact on quality. We delve into the techniques and tools that highly experienced crowd workers use to complete crowdsourcing micro-tasks more efficiently. To this end, we use both qualitative results from worker interviews and surveys and the results of a data-driven study of behavioral log data (i.e., clicks, keystrokes, and keyboard shortcuts) collected from crowd workers performing relevance judgment tasks. Our results highlight the presence of frequently used shortcut patterns that can speed up task completion, thus increasing the hourly wage of efficient workers. We observe how differences in crowd work experience result in different working strategies, productivity levels, and quality and diversity of the crowdsourced judgments.
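A small sketch of mining shortcut usage from behavioral logs like those the study collects; the log schema and event names are assumptions:

    from collections import Counter

    # each event: (worker_id, event_type, detail) captured during a judging task
    events = [
        ("w1", "keydown", "Tab"), ("w1", "keydown", "Space"),  # keyboard judging
        ("w2", "click",   "relevant_btn"),                     # mouse judging
        ("w1", "keydown", "Tab"), ("w1", "keydown", "Space"),
    ]

    # count per-worker key usage; frequent patterns flag shortcut-driven workers
    shortcuts = Counter(
        (worker, detail) for worker, etype, detail in events if etype == "keydown"
    )
    print(shortcuts.most_common(3))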
Crowdsourcing has become a standard methodology for collecting manually annotated data, such as relevance judgments, at scale. On crowdsourcing platforms like Amazon MTurk or FigureEight, crowd workers select tasks to work on based on dimensions such as task reward and requester reputation. Requesters then receive the judgments of workers who self-selected into the tasks and completed them successfully. Many crowd workers, however, preview tasks or begin working on them, reach varying stages of completion, and never submit their work. Such behavior results in unrewarded effort that remains invisible to requesters. In this paper, we investigate the phenomenon of task abandonment: the act of workers previewing or beginning a task and deciding not to complete it. We follow a three-fold methodology: 1) investigating the prevalence and causes of task abandonment by means of a survey across different crowdsourcing platforms, 2) data-driven analysis of logs collected during a large-scale relevance judgment experiment, and 3) controlled experiments measuring the effect of different dimensions on abandonment. Our results show that task abandonment is a widespread phenomenon. Apart from accounting for a considerable amount of wasted human effort, this has important implications for the hourly wages of workers, as they are not rewarded for tasks they do not complete. We also show how task abandonment may have strong implications for the use of the collected data (for example, in the evaluation of Information Retrieval systems).
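An illustrative computation of abandonment rates from assignment logs; the schema is an assumption, not the paper's dataset:

    import pandas as pd

    # hypothetical per-assignment log: last stage each worker reached
    log = pd.DataFrame({
        "worker": ["w1", "w2", "w3", "w4", "w5"],
        "stage":  ["previewed", "started", "submitted", "started", "submitted"],
    })

    abandoned = log["stage"] != "submitted"
    rate = abandoned.mean()
    unpaid = abandoned.sum()  # assignments with effort expended but no payment
    print(f"abandonment rate: {rate:.0%}, unrewarded assignments: {unpaid}")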
Knowledge bases are becoming a key asset leveraged for various types of applications on the Web, from search engines presenting 'entity cards' as the result of a query, to the use of structured data from knowledge bases to power virtual personal assistants. Wikidata is an open, general-interest knowledge base that is collaboratively developed and maintained by a community of thousands of volunteers. One of the major challenges faced in such a crowdsourcing project is attaining a high level of editor engagement. In order to intervene and encourage editors to become more committed to editing Wikidata, it is important to be able to predict at an early stage whether or not an editor will become an engaged editor. In this paper, we investigate this problem and study the evolution that editors with different levels of engagement exhibit in their editing behaviour over time. We measure an editor's
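A minimal sketch of the kind of early-stage engagement predictor such a study calls for; the features, labels, and numbers below are hypothetical stand-ins for Wikidata edit data:

    from sklearn.ensemble import RandomForestClassifier

    # per-editor features from the first days of activity:
    # [edits_in_first_week, distinct_items_touched, fraction_reverted]
    X = [[40, 25, 0.02], [3, 2, 0.30], [15, 10, 0.05],
         [1, 1, 0.00], [60, 40, 0.01], [2, 2, 0.50]]
    y = [1, 0, 1, 0, 1, 0]  # 1 = became an engaged editor (label definition assumed)

    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
    print(clf.predict_proba([[20, 12, 0.04]]))  # engagement probability, new editor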