We show that decoherence phenomena applied to the neutrino system could lead to an observable breaking of the fundamental CPT symmetry. Such an observation requires specific textures of non-diagonal decoherence matrices with non-zero δCP. Using the information from the CPT-conjugate channels νµ → νµ and ν̄µ → ν̄µ and their corresponding backgrounds, we estimate the sensitivity of the DUNE experiment for testing CPT under these conditions. Four scenarios of energy-dependent decoherence parameters, Γ_Eν = Γ × (Eν/GeV)^n with n = −1, 0, 1, 2, are taken into account. For most of them, DUNE is able to achieve a 5σ discovery potential with Γ of O(10^−23 GeV) for δCP = 3π/2, while for δCP = π/2 we reach 3σ with Γ of O(10^−24 GeV).
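As a minimal sketch of the energy-dependence parametrization quoted above, the snippet below evaluates Γ_Eν = Γ × (Eν/GeV)^n for the four exponents n = −1, 0, 1, 2. The normalization Γ0 and the sample energies are placeholders chosen for illustration only; they are not values taken from the analysis.

```python
# Illustrative sketch (not the paper's analysis code): evaluating the
# energy-dependent decoherence parameter Gamma(E_nu) = Gamma0 * (E_nu/GeV)**n
# for the four scenarios n = -1, 0, 1, 2.

def gamma_of_energy(gamma0_gev: float, e_nu_gev: float, n: int) -> float:
    """Energy-dependent decoherence parameter Gamma0 * (E_nu / GeV)**n, in GeV."""
    return gamma0_gev * (e_nu_gev / 1.0) ** n  # energies expressed in GeV

if __name__ == "__main__":
    gamma0 = 1e-23  # GeV, order of magnitude quoted for the 5-sigma case (placeholder)
    for n in (-1, 0, 1, 2):
        for e_nu in (0.5, 2.0, 5.0):  # GeV, roughly spanning the DUNE beam range (placeholder)
            g = gamma_of_energy(gamma0, e_nu, n)
            print(f"n={n:+d}  E_nu={e_nu:4.1f} GeV  Gamma={g:.2e} GeV")
```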
Abstract: For an automated camera focus, a fast and reliable algorithm is key to its success. It should work in a precisely defined way for as many cases as possible. However, many parameters have to be fine-tuned for it to work exactly as intended. Most of the literature focuses only on the algorithm itself and tests it with simulations or renderings, not in real settings. Gathering this data by manually placing objects in front of the camera is not feasible, as no human can perform one movement repeatedly in the same way, which makes an objective comparison impossible. We therefore used a small industrial robot with a set of over 250 combinations of movement, pattern, and zoom states to conduct these tests. The benefit of this method is the objectivity of the data and the monitoring of the important thresholds. Our interest lay in the optimization of an existing algorithm by showing its performance in as many benchmarks as possible, including standard use cases and worst-case scenarios. To validate our method, we gathered data from a first run, adapted the algorithm, and conducted the tests again. The second run showed improved performance.
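A benchmark matrix of this kind can be enumerated as a Cartesian product of the test dimensions. The sketch below is hypothetical: the specific movements, patterns, and zoom states listed are illustrative stand-ins, not the combinations actually used in the study.

```python
# Hypothetical sketch: enumerating a movement x pattern x zoom benchmark
# matrix, analogous to the ~250 combinations described in the abstract.
# All parameter values below are illustrative placeholders.
from itertools import product

movements = ["approach", "retreat", "lateral", "diagonal", "oscillate"]
patterns = ["siemens_star", "checkerboard", "text_chart", "low_contrast",
            "natural_scene", "backlit", "repetitive_texture"]
zoom_states = ["wide", "mid", "tele"]

test_cases = list(product(movements, patterns, zoom_states))
print(f"{len(test_cases)} benchmark combinations")  # 5 * 7 * 3 = 105 in this toy setup

for movement, pattern, zoom in test_cases:
    # In the real setup, the robot would execute `movement` in front of the
    # camera showing `pattern` at `zoom`, while focus metrics are logged.
    pass
```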
We propose laconic classification as a novel way to understand and compare the performance of diverse image classifiers. The goal in this setting is to minimise the amount of information (i.e., entropy) required in individual test images to maintain correct classification. Given a classifier and a test image, we compute an approximate minimal-entropy positive image for which the classifier provides a correct classification, becoming incorrect upon any further reduction. The notion of entropy offers a unifying metric that allows us to combine and compare the effects of various types of reduction (e.g., crop, colour reduction, resolution reduction) on classification performance, in turn generalising similar methods explored in previous works. Proposing two complementary frameworks for computing the minimal-entropy positive images of both human and machine classifiers, in experiments over the ILSVRC test set we find that machine classifiers are more sensitive, entropy-wise, to reduced resolution (versus cropping or reduced colour for machines, as well as reduced resolution for humans), supporting recent results suggesting a texture bias in the ILSVRC-trained models used. We also find, in the evaluated setting, that humans classify the minimal-entropy positive images of machine models with higher precision than machines classify those of humans.
CCS CONCEPTS: • Information systems → Multimedia information systems; • Computing methodologies → Neural networks.
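The sketch below illustrates the idea of a minimal-entropy positive image for a single reduction type (resolution), as one instance of the reductions the abstract combines; the paper's actual search frameworks are not reproduced here. The `classifier` and `true_label` names are hypothetical stand-ins for any image classifier and its ground-truth label, and the entropy estimate is a simple pixel-histogram approximation assumed for illustration. The same greedy search generalises to cropping or colour reduction by swapping out the reduction function.

```python
# Illustrative sketch (assumptions flagged above): greedily reduce resolution
# while the (hypothetical) classifier remains correct, keeping the
# lowest-entropy image that is still classified correctly.
import numpy as np

def image_entropy_bits(img: np.ndarray) -> float:
    """Approximate information content: pixel count times the Shannon entropy
    (in bits) of the pixel-value distribution."""
    _, counts = np.unique(img, return_counts=True)
    p = counts / counts.sum()
    per_pixel = -(p * np.log2(p)).sum()
    return float(per_pixel * img.shape[0] * img.shape[1])

def downsample(img: np.ndarray, factor: int) -> np.ndarray:
    """Naive block-averaging resolution reduction by an integer factor."""
    h = (img.shape[0] // factor) * factor
    w = (img.shape[1] // factor) * factor
    blocks = img[:h, :w].reshape(h // factor, factor, w // factor, factor, -1)
    return blocks.mean(axis=(1, 3)).astype(img.dtype)

def minimal_entropy_positive(img, true_label, classifier, max_factor=64):
    """Return the lowest-entropy downsampled image still classified correctly,
    together with its estimated entropy in bits (greedy approximation)."""
    best = img
    for factor in range(2, max_factor + 1):
        reduced = downsample(img, factor)
        if classifier(reduced) != true_label:
            break  # further reduction flips the prediction
        best = reduced
    return best, image_entropy_bits(best)
```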
scite is a Brooklyn-based organization that helps researchers better discover and understand research articles through Smart Citations: citations that display the context of the citation and describe whether the article provides supporting or contrasting evidence. scite is used by students and researchers from around the world and is funded in part by the National Science Foundation and the National Institute on Drug Abuse of the National Institutes of Health.