Mobile ad-hoc networking involves peer-to-peer communication in a network with a dynamically changing topology. Achieving energy-efficient communication in such a network is more challenging than in cellular networks, since there is no centralized arbiter such as a base station to administer power management. Reducing energy consumption is a major focus of research in ad-hoc wireless networking because the wireless devices are envisioned to have small batteries and to be incapable of energy scavenging. In this paper, we propose and evaluate a power control loop, similar to those commonly found in cellular CDMA networks, for ad-hoc wireless networks. We evaluate it using a comprehensive simulation infrastructure consisting of group mobility, group communication, and terrain blockage models. We show that this power control loop reduces energy consumption per transmitted byte by 10-20% and increases overall throughput by 15%.
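As a rough illustration of the kind of closed-loop power control the abstract refers to, the sketch below adjusts a link's transmit power toward an SINR target in fixed steps. The target, step size, and power limits are assumptions for illustration, not values from the paper.

```python
# Minimal sketch of a CDMA-style closed-loop power control step, adapted
# conceptually to a per-link ad-hoc setting. The target SINR, step size,
# and power limits below are illustrative assumptions, not values from
# the paper.

TARGET_SINR_DB = 10.0   # assumed quality target at the receiver
STEP_DB = 1.0           # fixed up/down adjustment per control interval
MIN_TX_DBM, MAX_TX_DBM = -10.0, 20.0  # assumed radio limits


def power_control_step(measured_sinr_db: float, tx_power_dbm: float) -> float:
    """Return the next transmit power for one link.

    The receiver compares measured SINR against the target and feeds back
    a one-bit up/down command; the transmitter moves one step accordingly,
    clamped to the radio's power range.
    """
    if measured_sinr_db < TARGET_SINR_DB:
        tx_power_dbm += STEP_DB   # link too weak: transmit louder
    else:
        tx_power_dbm -= STEP_DB   # link good enough: save energy
    return max(MIN_TX_DBM, min(MAX_TX_DBM, tx_power_dbm))


if __name__ == "__main__":
    power = 0.0
    for sinr in [6.0, 8.0, 11.0, 12.0, 9.5]:  # example per-interval measurements
        power = power_control_step(sinr, power)
        print(f"measured SINR {sinr:4.1f} dB -> next tx power {power:4.1f} dBm")
```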
The idea of “date” and “party” hubs has been influential in the study of protein–protein interaction networks. Date hubs display low co-expression with their partners, whilst party hubs have high co-expression. It was proposed that party hubs are local coordinators whereas date hubs are global connectors. Here, we show that the reported importance of date hubs to network connectivity can in fact be attributed to a tiny subset of them. Crucially, these few, extremely central, hubs do not display particularly low expression correlation, undermining the idea of a link between this quantity and hub function. The date/party distinction was originally motivated by an approximately bimodal distribution of hub co-expression; we show that this feature is not always robust to methodological changes. Additionally, topological properties of hubs do not in general correlate with co-expression. However, we find significant correlations between interaction centrality and the functional similarity of the interacting proteins. We suggest that thinking in terms of a date/party dichotomy for hubs in protein interaction networks is not meaningful, and it might be more useful to conceive of roles for protein-protein interactions rather than for individual proteins.
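To make the kind of analysis described above concrete, the following sketch computes a topological centrality and an average partner co-expression for each hub in a toy interaction network, then tests whether the two are related. The network, expression profiles, and hub cutoff are assumed purely for illustration.

```python
# Minimal sketch of the hub analysis described above: for each hub in a
# protein interaction network, compute a topological centrality and the
# average expression correlation with its partners, then test whether the
# two quantities are related. The graph, expression matrix, and the degree
# cutoff defining a "hub" are illustrative assumptions.

import networkx as nx
import numpy as np
from scipy.stats import spearmanr

# Toy protein-protein interaction network and expression profiles (assumed data).
ppi = nx.Graph([("A", "B"), ("A", "C"), ("A", "D"), ("B", "C"), ("B", "D"), ("D", "E")])
expression = {p: np.random.rand(20) for p in ppi.nodes}  # 20 conditions each

HUB_DEGREE = 3  # assumed cutoff; real studies typically use a higher threshold

betweenness = nx.betweenness_centrality(ppi)

hub_centrality, hub_coexpression = [], []
for protein in ppi.nodes:
    partners = list(ppi.neighbors(protein))
    if len(partners) < HUB_DEGREE:
        continue
    # Average Pearson correlation of the hub's profile with its partners' profiles.
    avg_pcc = np.mean([
        np.corrcoef(expression[protein], expression[q])[0, 1] for q in partners
    ])
    hub_centrality.append(betweenness[protein])
    hub_coexpression.append(avg_pcc)

# Does topology track co-expression across hubs? (The paper reports that,
# in general, it does not.)
rho, pval = spearmanr(hub_centrality, hub_coexpression)
print(f"Spearman rho = {rho:.2f}, p = {pval:.2g}")
```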
X10 is a modern object-oriented programming language designed for high-performance, high-productivity programming of parallel and multi-core computer systems. Compared to the lower-level thread-based concurrency model of the Java™ language, X10 provides higher-level concurrency constructs such as async, atomic and finish built into the language to simplify the creation, analysis and optimization of parallel programs. In this paper, we introduce a new algorithm for May-Happen-in-Parallel (MHP) analysis of X10 programs. The algorithm is based on simple path traversals in the Program Structure Tree, and does not rely on pointer alias analysis of thread objects as MHP analysis for Java programs does. We introduce a more precise definition of the MHP relation than in past work by adding condition vectors that identify the execution instances for which the MHP relation holds, instead of returning a single true/false value for all pairs of statement instances. Further, our analysis is refined by the observation that two statement instances occurring in atomic sections that execute at the same X10 place must have MHP = false. We expect that our MHP analysis algorithm will be applicable to any language that adopts the core concepts of places, async, finish, and atomic sections from the X10 programming model. We also believe that this approach offers programmers and parallel programming tools the best of both worlds: higher-level abstractions of concurrency coupled with simple and efficient analysis algorithms.
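The following is a highly simplified sketch of an MHP check by path traversal over a Program Structure Tree. It ignores program order, the paper's condition vectors, and the place-based atomic refinement; the node kinds and the rule are assumptions meant only to convey the intuition that parallelism is created by an async that is not joined by an intervening finish.

```python
# Highly simplified sketch of an MHP check over a Program Structure Tree (PST).
# Node kinds and the rule below are illustrative assumptions: this ignores
# program order, condition vectors, and the place-based atomic refinement
# described in the paper.

from dataclasses import dataclass, field
from typing import List, Optional


@dataclass(eq=False)
class PSTNode:
    kind: str                      # "root", "async", "finish", or "stmt"
    parent: Optional["PSTNode"] = None
    children: List["PSTNode"] = field(default_factory=list)

    def add(self, kind: str) -> "PSTNode":
        child = PSTNode(kind, parent=self)
        self.children.append(child)
        return child


def path_to_root(node: PSTNode) -> List[PSTNode]:
    path = []
    while node is not None:
        path.append(node)
        node = node.parent
    return path


def may_happen_in_parallel(s1: PSTNode, s2: PSTNode) -> bool:
    """True if an async lies on either statement's path below the common
    ancestor, with no finish between that async and the common ancestor."""
    anc1, anc2 = path_to_root(s1), path_to_root(s2)
    lca = next(n for n in anc1 if n in anc2)      # least common ancestor
    for path in (anc1, anc2):
        below_lca = path[:path.index(lca)]        # nodes strictly below the LCA
        seen_finish = False
        for node in below_lca[::-1]:              # walk down from just under the LCA
            if node.kind == "finish":
                seen_finish = True
            if node.kind == "async" and not seen_finish:
                return True
    return False


# Example: finish { async { s1 } }  s2  -- the finish joins the async, so no MHP.
root = PSTNode("root")
fin = root.add("finish")
s1 = fin.add("async").add("stmt")
s2 = root.add("stmt")
print(may_happen_in_parallel(s1, s2))  # False under this simplified rule
```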
Scalability and accuracy are well recognized challenges in deep extreme multi-label learning, where the objective is to train architectures for automatically annotating a data point with the most relevant subset of labels from an extremely large label set. This paper develops the DeepXML framework, which addresses these challenges by decomposing the deep extreme multi-label task into four simpler sub-tasks, each of which can be trained accurately and efficiently. Choosing different components for the four sub-tasks allows DeepXML to generate a family of algorithms with varying trade-offs between accuracy and scalability. In particular, DeepXML yields the Astec algorithm, which could be 2-12% more accurate and 5-30× faster to train than leading deep extreme classifiers on publicly available short-text datasets. Astec could also efficiently train on Bing short-text datasets containing up to 62 million labels while making predictions for billions of users and data points per day on commodity hardware. This allowed Astec to be deployed on the Bing search engine for a number of short-text applications, ranging from matching user queries to advertiser bid phrases to showing personalized ads, where it yielded significant gains in click-through rates, coverage, revenue and other online metrics over state-of-the-art techniques currently in production. DeepXML's code is available at https://github.com/Extreme-classification/deepxml.
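The abstract does not spell out the four sub-tasks, so the sketch below only illustrates the generic shortlist-then-rank inference pattern used by many deep extreme classifiers. Every name, shape, and step is an assumption rather than the Astec implementation (see the linked repository for that).

```python
# Generic sketch of shortlist-then-rank inference as used by many deep extreme
# classifiers. Every name, shape, and step here is an illustrative assumption,
# not the Astec implementation from the linked repository.

import numpy as np

rng = np.random.default_rng(0)

EMBED_DIM, NUM_LABELS, SHORTLIST, TOP_K = 64, 100_000, 200, 5

label_centroids = rng.standard_normal((NUM_LABELS, EMBED_DIM))    # assumed trained component
label_classifiers = rng.standard_normal((NUM_LABELS, EMBED_DIM))  # assumed one-vs-all weights


def encode(text: str) -> np.ndarray:
    """Stand-in for a learned short-text encoder (assumed component)."""
    vec = rng.standard_normal(EMBED_DIM)
    return vec / np.linalg.norm(vec)


def predict(text: str) -> list:
    x = encode(text)
    # 1) Shortlist: keep only the labels whose centroids are closest to x,
    #    so the ranking step never touches all 100k labels.
    shortlist = np.argsort(label_centroids @ x)[-SHORTLIST:]
    # 2) Rank: score the shortlisted labels with their one-vs-all classifiers.
    scores = label_classifiers[shortlist] @ x
    top = np.argsort(scores)[-TOP_K:][::-1]
    return [(int(shortlist[i]), float(scores[i])) for i in top]


print(predict("cheap flights to seattle"))
```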
Background: It has become apparent in the last few years that small non-coding RNAs (ncRNAs) play a very significant role in biological regulation. Among these, microRNAs (miRNAs), 22-23 nucleotide small regulatory RNAs, have been a major object of study, as they have been found to be involved in some basic biological processes. So far about 706 miRNAs have been identified in humans alone; however, it is expected that many more miRNAs are encoded in the human genome. In this report, a "context-sensitive" Hidden Markov Model (CSHMM) to represent miRNA structures is proposed and tested extensively. We also demonstrate how this model can be used in conjunction with filters as an ab initio method for miRNA identification. Results: The probabilities of the CSHMM were estimated using known human miRNA sequences. A classifier based on the likelihood score of this "trained" CSHMM was evaluated by (a) cross-validation estimates using known human sequences, (b) predictions on a dataset of known miRNAs, and (c) predictions on a dataset of non-coding RNAs. The CSHMM was compared with two recently developed methods, miPred and CID-miRNA, and the results suggest that it performs better than both. In addition, the CSHMM was used in a pipeline that includes filters checking for the presence of EST matches and of Drosha cutting sites. This pipeline was used to scan human chromosome 19 for potential miRNAs, and to identify novel miRNAs from small RNA sequences of normal human leukocytes obtained by deep sequencing (Solexa). A total of 49 and 308 novel miRNAs were predicted from chromosome 19 and from the small RNA sequences, respectively. Conclusion: The results suggest that the CSHMM is likely to be a useful tool for miRNA discovery, whether for analysis of individual sequences or for genome scans. Our pipeline, consisting of a CSHMM and filters to reduce false positives, shows promise as an approach for ab initio identification of novel miRNAs.
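The screening pipeline can be pictured roughly as below: score each candidate with the trained model, discard low-likelihood candidates, and keep only those passing the EST-match and Drosha-site filters. The scoring function and both filters here are hypothetical stand-ins, not the paper's trained CSHMM or its sequence-evidence checks.

```python
# Minimal sketch of the screening pipeline described above. The scoring
# function and both filters are hypothetical stand-ins; the paper uses a
# trained context-sensitive HMM and real sequence evidence for these steps.

from typing import Iterable, List

LIKELIHOOD_CUTOFF = -60.0  # assumed decision threshold on the log-likelihood


def cshmm_log_likelihood(seq: str) -> float:
    """Stand-in for the trained CSHMM score of a candidate precursor."""
    # Toy heuristic: reward GC content, loosely mimicking a structured stem.
    gc = sum(base in "GC" for base in seq)
    return -100.0 + 2.0 * gc


def has_est_match(seq: str) -> bool:
    return True   # placeholder: would query an EST database


def has_drosha_site(seq: str) -> bool:
    return True   # placeholder: would check for a plausible Drosha cut site


def screen_candidates(candidates: Iterable[str]) -> List[str]:
    predicted = []
    for seq in candidates:
        if cshmm_log_likelihood(seq) < LIKELIHOOD_CUTOFF:
            continue                      # rejected by the classifier
        if has_est_match(seq) and has_drosha_site(seq):
            predicted.append(seq)         # survives both filters
    return predicted


print(screen_candidates(["GCGCGCAUAUGCGCGCGCGCAUAUGCGCGC", "AUAUAUAUAUAU"]))
```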
Noise is a stark reality in real-life data. It has a particularly significant impact in the domain of text analytics, as data cleaning forms a very large part of the data processing cycle. Noisy unstructured text is common in informal settings such as online chat, SMS, email, newsgroups and blogs, in text automatically transcribed from speech, and in text automatically recognized from printed or handwritten material. Gigabytes of such data are generated every day on the Internet, in contact centers, and on mobile phones. Researchers have looked at various text mining issues such as pre-processing and cleaning noisy text, information extraction, rule learning, and classification for noisy text. This paper focuses on the issues faced by automatic text classifiers in analyzing noisy documents coming from various sources. The goal of this paper is to bring out and study the effect of different kinds of noise on automatic text classification. Does the nature of such text warrant moving beyond traditional text classification techniques? We present detailed experimental results with simulated noise on the Reuters-21578 and 20 Newsgroups benchmark datasets, as well as results on real-life noisy datasets from various CRM domains.
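A minimal version of such a simulated-noise experiment might look like the sketch below, which injects random character-level substitutions into 20 Newsgroups test documents and measures how a standard TF-IDF/SVM classifier degrades. The noise model, noise levels, and classifier are illustrative assumptions, not the paper's exact setup.

```python
# Minimal sketch of a simulated-noise experiment in the spirit described above:
# inject random character-level noise into test documents and measure how a
# standard bag-of-words classifier degrades. The noise model, noise levels, and
# classifier choice are illustrative assumptions.

import random

from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import accuracy_score
from sklearn.svm import LinearSVC

random.seed(0)
ALPHABET = "abcdefghijklmnopqrstuvwxyz "


def add_char_noise(text: str, noise_level: float) -> str:
    """Replace each character with a random one with probability noise_level."""
    return "".join(
        random.choice(ALPHABET) if random.random() < noise_level else ch
        for ch in text
    )


categories = ["sci.space", "rec.autos", "talk.politics.misc"]  # small subset for speed
train = fetch_20newsgroups(subset="train", categories=categories)
test = fetch_20newsgroups(subset="test", categories=categories)

vectorizer = TfidfVectorizer()
clf = LinearSVC()
clf.fit(vectorizer.fit_transform(train.data), train.target)

for noise in [0.0, 0.05, 0.10, 0.20]:
    noisy_docs = [add_char_noise(doc, noise) for doc in test.data]
    preds = clf.predict(vectorizer.transform(noisy_docs))
    print(f"noise {noise:.2f}: accuracy {accuracy_score(test.target, preds):.3f}")
```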