Abstract—We provide a counterexample to a conjecture by Leslie Valiant. Most interestingly, the counterexample was found by introducing guessing numbers, a new graph-theoretical concept. We show that solvability of information flow problems of a quite general type is closely related to problems concerning guessing numbers.
The guessing number of a directed graph (digraph), equivalent to the entropy of that digraph, was introduced as a direct criterion on the solvability of a network coding instance. This paper makes two contributions on the guessing number. First, we introduce an undirected graph on all possible configurations of the digraph, referred to as the guessing graph, which encapsulates the essence of dependence amongst configurations. We prove that the guessing number of a digraph is equal to the logarithm of the independence number of its guessing graph. Therefore, network coding solvability is no longer a problem on the operations made by each node, but is simplified into a problem on the messages that can transit through the network. By studying the guessing graph of a given digraph, and how to combine digraphs or alphabets, we are thus able to derive bounds on the guessing number of digraphs. Second, we construct specific digraphs with high guessing numbers, yielding network coding instances where a large amount of information can transit. We first propose a construction of digraphs with finite parameters based on cyclic codes, with guessing number equal to the degree of the generator polynomial. We then construct an infinite class of digraphs with arbitrary girth for which the ratio between the linear guessing number and the number of vertices tends to one, despite these digraphs being arbitrarily sparse. These constructions yield solvable network coding instances with a relatively small number of intermediate nodes for which the node operations are known and linear, although these instances are sparse and the sources are arbitrarily far from their corresponding sinks.
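The equivalence between the guessing number and the independence number of the guessing graph can be checked by brute force on a tiny instance. The sketch below (an illustration, not the paper's construction) takes the directed 3-cycle over a binary alphabet: two configurations are adjacent in the guessing graph when some vertex sees identical in-neighbour values in both but would have to output different guesses, so no single guessing strategy can fix both.

```python
from itertools import combinations, product
from math import log2

# Directed 3-cycle: vertex v's only in-neighbour is (v - 1) mod 3.
n, s = 3, 2
in_neighbours = {v: [(v - 1) % n] for v in range(n)}

configs = list(product(range(s), repeat=n))  # all s^n configurations

# Guessing-graph adjacency: x and y are "confusable" when some vertex v
# agrees with its in-neighbourhood in both configurations yet differs in
# its own value, so no strategy has both x and y as fixed points.
def adjacent(x, y):
    return any(
        x[v] != y[v] and all(x[u] == y[u] for u in in_neighbours[v])
        for v in range(n)
    )

# Independence number by exhaustive search (fine for 8 vertices).
alpha = max(
    len(subset)
    for r in range(len(configs) + 1)
    for subset in combinations(configs, r)
    if all(not adjacent(a, b) for a, b in combinations(subset, 2))
)

print(alpha, log2(alpha))
```

For the 3-cycle every fixed point of a strategy is determined by the value at one vertex, so at most s = 2 configurations can be fixed simultaneously; the search indeed finds independence number 2, giving guessing number log2(2) = 1.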
The prediction of protein secondary structure by use of carefully structured neural networks and multiple sequence alignments has been investigated. Separate networks are used for predicting the three secondary structures alpha-helix, beta-strand, and coil. The networks are designed using a priori knowledge of amino acid properties with respect to the secondary structure and the characteristic periodicity in alpha-helices. Since these single-structure networks all have less than 600 adjustable weights, overfitting is avoided. To obtain a three-state prediction of alpha-helix, beta-strand, or coil, ensembles of single-structure networks are combined with another neural network. This method gives an overall prediction accuracy of 66.3% when using 7-fold cross-validation on a database of 126 nonhomologous globular proteins. Applying the method to multiple sequence alignments of homologous proteins increases the prediction accuracy significantly to 71.3% with corresponding Matthews correlation coefficients C alpha = 0.59, C beta = 0.52, and Cc = 0.50. More than 72% of the residues in the database are predicted with an accuracy of 80%. It is shown that the network outputs can be interpreted as estimated probabilities of correct prediction, and, therefore, these numbers indicate which residues are predicted with high confidence.
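The three-state combination step can be illustrated schematically. The sketch below does not reproduce the paper's combining network or its trained weights; a simple softmax over the three single-structure scores stands in for it, showing how per-class outputs become probabilities whose maximum marks a high-confidence prediction.

```python
from math import exp

def combine(helix_score, strand_score, coil_score):
    """Turn three single-structure scores into a three-state prediction.

    Illustrative stand-in for a trained combining network: a softmax
    normalises the scores into probabilities, and the residue is
    assigned to the most probable class.
    """
    scores = {"alpha-helix": helix_score,
              "beta-strand": strand_score,
              "coil": coil_score}
    z = sum(exp(s) for s in scores.values())
    probs = {k: exp(s) / z for k, s in scores.items()}
    prediction = max(probs, key=probs.get)
    return prediction, probs

# Hypothetical scores for one residue position.
pred, probs = combine(2.1, 0.3, -0.5)
```

A residue whose winning probability is close to 1 is predicted with high confidence, mirroring the interpretation of the network outputs as estimated probabilities of correct prediction.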
Objectives: Actively following a conversation can be demanding and limited cognitive resources must be allocated to the processing of speech, retaining and encoding the perceived content, and preparing an answer. The aim of the present study was to disentangle the allocation of effort into the effort required for listening (listening effort) and the effort required for retention (memory effort) by means of pupil dilation. Design: Twenty-five normal-hearing German-speaking participants underwent a sentence final word identification and recall test, while pupillometry was conducted. The participants’ task was to listen to a sentence in four-talker babble background noise and to repeat the final word afterward. At the end of a list of sentences, they were asked to recall as many of the final words as possible. Pupil dilation was recorded during different list lengths (three sentences versus six sentences) and varying memory load (recall versus no recall). Additionally, the effect of a noise reduction algorithm on performance, listening effort, and memory effort was evaluated. Results: We analyzed pupil dilation both before each sentence (sentence baseline) as well as the dilation in response to each sentence relative to the sentence baseline (sentence dilation). The pupillometry data indicated a steeper increase of sentence baseline under recall compared to no recall, suggesting higher memory effort due to memory processing. This increase in sentence baseline was most prominent toward the end of the longer lists, that is, during the second half of six sentences. Without a recall task, sentence baseline declined over the course of the list. Noise reduction appeared to have a significant influence on effort allocation for listening, which was reflected in generally decreased sentence dilation. Conclusion: Our results showed that recording pupil dilation in a speech identification and recall task provides valuable insights beyond behavioral performance. It is a suitable tool to disentangle the allocation of effort to listening versus memorizing speech.
In this paper, we are interested in memoryless computation, a modern paradigm to compute functions which generalises the famous XOR swap algorithm to exchange the contents of two variables without using a buffer. This uses a combinatorial framework for procedural programming languages, where programs are only allowed to update one variable at a time. We first consider programs which do not have any memory. We prove that any function of n variables can be computed this way in only 4n−3 variable updates. We then derive the exact number of instructions required to compute any manipulation of variables. This shows that combining variables, instead of simply moving them around, not only allows for memoryless programs, but also yields shorter programs. Second, we show that programs which use memory can also be incorporated into the memoryless computation framework. We then quantify the gains obtained by using memory: this leads to shorter programs and allows us to use only binary instructions, which is not sufficient in general when no memory is used.
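The XOR swap that memoryless computation generalises can be stated in three single-variable updates, with no temporary buffer:

```python
def xor_swap(x, y):
    """Exchange two integers using only single-variable updates."""
    x = x ^ y  # x now holds x XOR y
    y = x ^ y  # (x XOR y) XOR y recovers the original x
    x = x ^ y  # (x XOR y) XOR original-x recovers the original y
    return x, y

print(xor_swap(5, 9))  # (9, 5)
```

Each instruction updates exactly one variable as a function of the current values, which is precisely the program model considered here; the 4n−3 bound extends this idea from the transposition of two variables to arbitrary functions of n variables.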