A technique for fusing Kalman filter information has been developed by Jeffrey Uhlmann, Simon Julier, et al. that addresses the problems that arise from fusing correlated measurements. The researchers have named this technique "covariance intersection" and have presented papers on it at several robotics and control theory conferences. The technique is applicable to these areas because robotic systems often have data flowing between multiple interconnected algorithms with no guarantee that the data flowing into any algorithm are independent. It can be shown that the covariance intersection technique is a log-linear combination of two Gaussian functions and is thus related to Chernoff information. Given this relationship, covariance intersection can be generalized to the fusion of any two probability density functions. One of the selection criteria suggested by the developers for the optimal combination of two Gaussian functions is the minimization of the determinant of the fused covariance, which is equivalent to the minimization of the Shannon information of the fused state. This equivalence justifies the selection of the determinant criterion for many applications of covariance intersection. Given the recognition of a more general rule for the covariance intersection technique, other probabilistic measures, such as the Chernoff information, may be appropriate for other fusion applications.
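The fusion rule described above can be sketched compactly. The following is a minimal illustration, not the authors' implementation: it forms the covariance intersection combination P⁻¹ = ω·Pa⁻¹ + (1−ω)·Pb⁻¹ and picks ω by a simple grid search under the determinant-minimization (Shannon information) criterion mentioned in the abstract; the function name and the grid-search strategy are choices of this sketch.

```python
import numpy as np

def covariance_intersection(x_a, P_a, x_b, P_b, n_grid=101):
    """Fuse two estimates (x_a, P_a) and (x_b, P_b) with unknown
    cross-correlation via covariance intersection:
        P^-1 = w * P_a^-1 + (1 - w) * P_b^-1,
        x    = P (w * P_a^-1 x_a + (1 - w) * P_b^-1 x_b),
    choosing w in [0, 1] to minimize det(P)."""
    Pa_inv, Pb_inv = np.linalg.inv(P_a), np.linalg.inv(P_b)
    best = None
    for w in np.linspace(0.0, 1.0, n_grid):  # coarse search over the weight
        P = np.linalg.inv(w * Pa_inv + (1.0 - w) * Pb_inv)
        d = np.linalg.det(P)
        if best is None or d < best[0]:
            x = P @ (w * Pa_inv @ x_a + (1.0 - w) * Pb_inv @ x_b)
            best = (d, x, P, w)
    return best[1], best[2], best[3]
```

In practice the one-dimensional minimization over ω is done with a proper scalar optimizer rather than a grid, but the grid keeps the sketch dependency-free.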
Abstract: An important objective for analyzing real-world graphs is to achieve scalable performance on large, streaming graphs. A challenging and relevant example is the graph partition problem. As a combinatorial problem, graph partition is NP-hard, but existing relaxation methods provide reasonable approximate solutions that can be scaled for large graphs. Competitive benchmarks and challenges have proven to be an effective means to advance state-of-the-art performance and foster community collaboration. This paper describes a graph partition challenge with a baseline partition algorithm of sub-quadratic complexity. The algorithm employs rigorous Bayesian inferential methods based on a statistical model that captures characteristics of the real-world graphs. This strong foundation enables the algorithm to address limitations of well-known graph partition approaches such as modularity maximization. This paper describes various aspects of the challenge including: (1) the data sets and streaming graph generator, (2) the baseline partition algorithm with pseudocode, (3) an argument for the correctness of parallelizing the Bayesian inference, (4) different parallel computation strategies such as node-based parallelism and matrix-based parallelism, (5) evaluation metrics for partition correctness and computational requirements, (6) preliminary timing of a Python-based demonstration code and the open source C++ code, and (7) considerations for partitioning the graph in streaming fashion. Data sets and source code for the algorithm, as well as metrics with detailed documentation, are available at GraphChallenge.org.
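To make the statistical-model framing concrete, here is a minimal sketch of scoring a candidate partition under a plain (non-degree-corrected) stochastic blockmodel likelihood. This is illustrative only: the challenge's baseline uses a degree-corrected model with full Bayesian inference, and the function name and Poisson-style score below are assumptions of this sketch, not the challenge code.

```python
import numpy as np

def sbm_log_likelihood(A, labels, num_blocks):
    """Score a partition of a graph (directed adjacency matrix A) under a
    simple stochastic blockmodel: sum over block pairs (r, s) of
    m_rs * log(m_rs / (n_r * n_s)), where m_rs is the edge count between
    blocks and n_r the block sizes. Higher is better."""
    n = np.bincount(labels, minlength=num_blocks).astype(float)  # block sizes
    m = np.zeros((num_blocks, num_blocks))                       # inter-block edge counts
    rows, cols = np.nonzero(A)
    for i, j in zip(rows, cols):
        m[labels[i], labels[j]] += A[i, j]
    with np.errstate(divide="ignore", invalid="ignore"):
        ratio = m / np.outer(n, n)
        terms = np.where(m > 0, m * np.log(ratio), 0.0)  # treat 0*log(0) as 0
    return terms.sum()
```

A partition aligned with the true community structure scores higher than one that cuts across it, which is the signal the Bayesian baseline exploits when proposing block-assignment moves.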
The rise of graph analytic systems has created a need for ways to measure and compare the capabilities of these systems. Graph analytics present unique scalability difficulties. The machine learning, high performance computing, and visual analytics communities have wrestled with these difficulties for decades and developed methodologies for creating challenges to move these communities forward. The proposed Subgraph Isomorphism Graph Challenge draws upon prior challenges from machine learning, high performance computing, and visual analytics to create a graph challenge that is reflective of many real-world graph analytics processing systems. The Subgraph Isomorphism Graph Challenge is a holistic specification with multiple integrated kernels that can be run together or independently. Each kernel is well defined mathematically and can be implemented in any programming environment. Subgraph isomorphism is amenable to both vertex-centric implementations and array-based implementations (e.g., using the GraphBLAS.org standard). The computations are simple enough that performance predictions can be made based on simple computing hardware models. The surrounding kernels provide the context for each kernel that allows rigorous definition of both the input and the output for each kernel. Furthermore, since the proposed graph challenge is scalable in both problem size and hardware, it can be used to measure and quantitatively compare a wide range of present-day and future systems. Serial implementations have been written in C++, Python, Python with Pandas, Matlab, Octave, and Julia, and their single-threaded performance has been measured. Specifications, data, and software are publicly available at GraphChallenge.org.
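As an illustration of the array-based style the challenge endorses (GraphBLAS-like linear algebra on an adjacency matrix), here is a hedged sketch of counting triangles, the smallest nontrivial subgraph, with dense numpy operations. The challenge's kernels address general subgraph isomorphism and use sparse matrices; this example only shows the matrix-based idiom.

```python
import numpy as np

def count_triangles(A):
    """Count triangles in an undirected graph from its adjacency matrix.
    (A @ A)[i, j] counts length-2 paths i -> k -> j; multiplying
    elementwise by A keeps only paths closed into a triangle by an edge
    (i, j). Each triangle is then counted 6 times (3 vertices x 2
    orientations), hence the division by 6."""
    A = np.asarray(A, dtype=np.int64)
    closed = (A @ A) * A
    return int(closed.sum() // 6)
```

The same masked matrix-multiply pattern generalizes: in GraphBLAS terms it is a matrix product with the graph's own adjacency structure used as the output mask.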
Measurements have been made of the reactions ν_e n → e⁻ p and ν_μ n → μ⁻ p in a detector located an effective distance of 96 m from a neutrino source. These measurements yield directly the energy-dependent ratio of the neutrino fluxes [Φ(E(ν_e))/Φ(E(ν_μ))]_obs incident on the detector. When combined with an estimate of the flux ratio emanating from the source, [Φ(E(ν_e))/Φ(E(ν_μ))]_calc, the measured ratio provides an upper limit on the strength of mixing between ν_μ and ν_e. We obtain sin²2α < 3.4 × 10⁻³ (90% C.L.) in the limit of large mass difference Δm² (= |m₁² − m₂²|) between neutrino mass eigenstates m₁ and m₂, and an upper limit on the product Δm² sin2α < 0.43 eV² in the limit of small mass difference.
If the masses of neutrinos are nondegenerate, and if separate lepton number is not exactly conserved, neutrinos of a given flavor will oscillate into neutrinos of another flavor. In this paper we report measurements of events from the reactions ν_e n → e⁻ p and ν_μ n → μ⁻ p induced in a neutrino detector by wide-band neutrino fluxes Φ(E(ν_e)) and Φ(E(ν_μ)), respectively. The measurements yield directly the flux ratio incident on the detector. After subtraction of the estimated flux ratio [Φ(E(ν_e))/Φ(E(ν_μ))]_calc initially present in the beam, no evidence for the oscillation ν_μ → ν_e is present. The data exclude a significant region of the Δm²–sin²2α space, where Δm² = |m₁² − m₂²| in eV², m₁ and m₂ are neutrino mass eigenstates, and sin²2α is the strength of mixing between ν_μ and ν_e. The neutrino detector consists of 112 planes of liquid scintillator (each plane 4 m × 4 m in area × 8 cm thick) and 224 planes of proportional drift cells (4.2 m × 4.2 m in area × 3.8 cm thick) uniformly interspersed.
The fine segmentation (1792 scintillator cells and 12096 proportional drift cells) and the pulse-height and timing characteristics of the elements provide determination of event topology, identification of electromagnetic showers, and substantial discrimination through dE/dx measurements between electrons and photons as well as pions and protons. Immediately downstream of the detector is a 30-ton shower counter of area 4 m × 4 m with 12 radiation lengths to provide additional containment of showers from events occurring at the downstream end of the detector. Further downstream is a magnet of aperture 1.8 m × 1.8 m × 0.46 m for study of the very-low-Q² region of the ν_μ-induced quasielastic reaction and measurement of the antineutrino contamination present in the incident neutrino beam.
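For context, the large- and small-Δm² limits quoted above follow from the standard two-flavor vacuum oscillation probability (a textbook relation, not reproduced from this paper), with L the source-detector distance and E_ν the neutrino energy:

```latex
P(\nu_\mu \to \nu_e)
  = \sin^2 2\alpha \,
    \sin^2\!\left(\frac{1.27\,\Delta m^2\,[\mathrm{eV}^2]\;L\,[\mathrm{m}]}
                       {E_\nu\,[\mathrm{MeV}]}\right)
```

For large Δm² the rapidly oscillating sin² factor averages to 1/2 over the energy spectrum, so the experiment bounds sin²2α directly; for small Δm², sin x ≈ x and the probability scales as (Δm² sin2α)², so the experiment bounds the product Δm² sin2α.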