Compressive lensless imagers enable novel applications in an extremely compact device, requiring only a phase or amplitude mask placed close to the sensor. They have been demonstrated for 2D and 3D microscopy, single-shot video, and single-shot hyperspectral imaging; in each case, a compressive-sensing-based inverse problem is solved in order to recover a 3D data-cube from a 2D measurement. Typically, this is accomplished using convex optimization and hand-picked priors. Alternatively, deep learning-based reconstruction methods offer the promise of better priors, but require many thousands of ground truth training pairs, which can be difficult or impossible to acquire. In this work, we propose an unsupervised approach based on untrained networks for compressive image recovery. Our approach does not require any labeled training data, but instead uses the measurement itself to update the network weights. We demonstrate our untrained approach on lensless compressive 2D imaging, single-shot high-speed video recovery using the camera’s rolling shutter, and single-shot hyperspectral imaging. We provide simulation and experimental verification, showing that our method results in improved image quality over existing methods.
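The core idea — updating weights using only the measurement itself — can be sketched with a toy inverse problem. In the paper the scene is parameterized by an untrained convolutional network; the sketch below simplifies this by optimizing the pixel values directly, with a hypothetical random matrix standing in for the mask's forward model. All parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy forward model (not the paper's actual optics):
# a random "mask" matrix A maps an n-pixel scene to m < n sensor readings.
n, m = 64, 32
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = rng.random(n)   # ground-truth scene (unknown in practice)
y = A @ x_true           # the single compressive measurement

# Unsupervised recovery: no labeled training pairs are used.  The estimate
# x is updated by gradient descent on the measurement-consistency loss
#   L(x) = ||A x - y||^2,
# exactly the self-supervised signal the untrained-network approach relies on.
x = np.zeros(n)
lr = 0.05
for _ in range(5000):
    grad = 2 * A.T @ (A @ x - y)   # gradient of the data-fidelity term
    x -= lr * grad
```

In the full method, the network architecture itself acts as the image prior that selects among the many solutions consistent with the underdetermined measurement; this direct-pixel sketch recovers only measurement consistency.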
The automatic generation of database user interfaces from declarative models has been widely studied. The task, domain, and user models are three important declarative models from which a user interface can be built. This paper proposes a framework, i.e., a methodological process and a software prototype, to drive automatic database user interface design and code-behind generation from the task, user, and domain models combined. This covers both the user interface and sound and complete data definition, manipulation, and update. The case study used in this paper is Translogistic, a project supported by the Walloon Region that aims to develop a highly capable, competitive, and complete combined-transport service together with high-quality logistics.
Background: As antibiotic resistance poses a significant global health threat, we need not only to accelerate the development of novel antibiotics but also to devise better treatment strategies for existing drugs, improving their efficacy and preventing the selection of further resistance. New tools are required to rationally design dosing regimens from data collected in the early phases of antibiotic and dosing development. Mathematical models, such as mechanistic pharmacodynamic models of drug-target binding, explain in mechanistic detail how a given drug concentration affects the targeted bacteria. However, no tools available in the literature allow non-quantitative scientists to develop computational models that simulate antibiotic-target binding and its effects on bacteria.

Results: In this work, we have devised an extension of a mechanistic binding-kinetics model that incorporates clinical drug-concentration data. Based on the extended model, we develop a novel, interactive, web-based tool that allows non-quantitative scientists to create and visualize their own computational models of bacterial antibiotic-target binding for the drugs and bacteria they study. We also demonstrate, using our vCOMBAT tool, how rifampicin affects populations of tuberculosis bacteria.

Conclusions: The vCOMBAT online tool is publicly available at https://combat-bacteria.org/.
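The kind of binding-kinetics model the tool builds on can be sketched in a few lines: free targets bind drug at rate k_on and complexes dissociate at rate k_off. The parameter values below are purely illustrative assumptions, not fitted vCOMBAT values.

```python
# Hypothetical binding-kinetics parameters for illustration only:
k_on, k_off = 0.1, 0.01   # binding (1/(uM*h)) and unbinding (1/h) rates
n_targets = 100.0         # binding targets per bacterial cell
drug_conc = 5.0           # constant drug concentration (uM)

# Forward-Euler integration of  dB/dt = k_on*C*(T - B) - k_off*B,
# where B is the number of drug-bound targets per cell.
dt, steps = 0.01, 10000   # 100 h of simulated time
bound = 0.0
for _ in range(steps):
    free = n_targets - bound
    bound += dt * (k_on * drug_conc * free - k_off * bound)

fraction_bound = bound / n_targets
# At equilibrium, fraction bound -> k_on*C / (k_on*C + k_off)
```

A pharmacodynamic layer would then map the bound fraction to a kill or growth-inhibition rate for the bacterial population; incorporating time-varying clinical concentration data amounts to making `drug_conc` a function of time inside the loop.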
Like the time-complexity models that have significantly contributed to the analysis and development of fast algorithms, energy-complexity models for parallel algorithms are desired as crucial means to develop energy-efficient algorithms for ubiquitous multicore platforms. Ideal energy-complexity models should be validated on real multicore platforms and be applicable to a wide range of parallel algorithms. However, existing energy-complexity models for parallel algorithms are either theoretical, without model validation, or algorithm-specific, without the ability to analyze the energy complexity of a wide range of parallel algorithms. This paper presents a new general, validated energy-complexity model for parallel (multithreaded) algorithms, the ICE model. The new model abstracts away possible multicore platforms through the static and dynamic energy of their computational operations and data accesses, and derives the energy complexity of a given algorithm from its work, span, and I/O complexity. The new model is validated with different sparse matrix-vector multiplication (SpMV) algorithms and dense matrix multiplication (matmul) algorithms running on high-performance computing (HPC) platforms (e.g., Intel Xeon and Xeon Phi). The new energy-complexity model is able to characterize and compare the energy consumption of SpMV and matmul kernels along three axes: different algorithms, different input matrix types, and different platforms. The model's predictions of which algorithm consumes more energy, for different inputs on different platforms, are confirmed by the experimental results. To improve the usability and accuracy of the new model for a wide range of platforms, the platform parameters of the ICE model are provided for eleven platforms, including HPC, accelerator, and embedded platforms.
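The shape of such a model — dynamic energy charged per operation and per data transfer, plus static energy accrued over the parallel running time bounded by work and span — can be sketched as follows. The parameter values and the exact combination rule are illustrative assumptions, not the ICE model's measured platform constants or precise formulas.

```python
# A minimal sketch of a work/span/I-O-based energy estimate (illustrative
# parameter values, not measured platform constants):
def energy_complexity(work, span, io, cores,
                      e_op=1.0e-9,      # dynamic energy per operation (J)
                      e_io=1.0e-8,      # dynamic energy per data transfer (J)
                      p_static=10.0,    # static power of the platform (W)
                      op_time=1.0e-9):  # time per operation (s)
    # Dynamic energy: charged per operation and per data transfer,
    # independent of how the work is scheduled across cores.
    e_dyn = e_op * work + e_io * io
    # Static energy: accrues over the parallel running time, lower-bounded
    # by the critical path (span) and by perfect work division.
    t_par = max(work / cores, span) * op_time
    e_stat = p_static * t_par
    return e_dyn + e_stat
```

Under this kind of abstraction, comparing two SpMV variants reduces to comparing their (work, span, I/O) triples under one platform's energy parameters, which is how the model can rank algorithms per input type and per platform.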
Information-system UI (user interface) generation from declarative models has been the focus of numerous and varied approaches in the human-computer interaction community. Typically, each approach uses different models, chosen for its particular concerns. This paper proposes a new process that combines the task, domain, and user models to drive information-system user interface design and code-behind generation. To this end, we propose a framework, i.e., a methodological process, a meta-model, and a software prototype called DB-USE.
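The interplay of the three models can be illustrated with a toy generator: the domain model supplies the entity's fields, the user model filters which operations a role may perform, and the task model selects the operation to generate. The mini-models and entity names below are hypothetical, not the DB-USE meta-model.

```python
# Hypothetical mini-models for illustration only.
domain_model = {"Shipment": [("id", "int"), ("origin", "text"), ("weight", "number")]}
user_model = {"operator": {"Shipment": {"create", "read"}}}

def generate_form(entity, role, task):
    # User model: refuse tasks the role is not granted for this entity.
    if task not in user_model[role][entity]:
        raise PermissionError(f"{role} may not {task} {entity}")
    # Domain model: one input field per entity attribute.
    rows = [f'  <input name="{name}" type="{ftype}">'
            for name, ftype in domain_model[entity]]
    # Task model: the selected operation shapes the generated form.
    return "\n".join([f'<form action="{task}_{entity.lower()}">', *rows, "</form>"])

html = generate_form("Shipment", "operator", "create")
```

A real model-driven generator would additionally emit the code-behind (validation, data-access calls) from the same declarative sources, keeping UI and data manipulation consistent by construction.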
Work package 2 (WP2) aims to develop libraries for energy-efficient inter-process communication and data sharing on the EXCESS platforms. Deliverable D2.4 reports on the final prototype of programming abstractions for energy-efficient inter-process communication. Section 1 is the updated overview of the prototype of programming abstractions and the devised power/energy models. Sections 2-6 contain the latest results of the four studies.

Compared with D2.3, the model proposed here uses the ideal-cache memory model to compute the I/O complexity of algorithms. Besides a case study of SpMV demonstrating how to apply the ICE model to find the energy complexity of parallel algorithms, Deliverable D2.4 also reports a case study applying the ICE model to dense matrix multiplication (matmul). The model is then validated with both data-intensive (i.e., SpMV) and computation-intensive (i.e., matmul) algorithms according to three aspects: different algorithms, different input types/sizes, and different platforms. To make the reading flow easy to follow, this report includes a complete study of the ICE model along with the latest results.

1.3 Energy Model on CPU for Lock-free Data Structures in Dynamic Environments
In this section, we first consider the modeling and analysis of the performance of lock-free data structures. We then combine the performance analysis with the power model introduced in D2.1 [75] and D2.3 [73] to estimate the energy efficiency of lock-free data structures used in various settings. Lock-free data structures are based on retry loops and are called by application-specific routines. In contrast to the model and analysis provided in D2.3, we consider here lock-free data structures in dynamic environments.
The size of each retry loop, and the size of the application routines invoked in between, are not constant but may change dynamically.

We present two analytical frameworks for calculating the performance of lock-free data structures, following two different approaches. The first framework, the simpler one, is based on queueing theory. It introduces an average-based approach that facilitates a more coarse-grained analysis, with the benefit of being independent of the size distributions; thanks to this independence, it covers a set of complicated designs. The second approach, instantiated with an exponential distribution for the size of the application routines, uses Markov chains and is tighter because it constructs the execution stochastically, step by step.

Both frameworks provide performance estimates that are close to what we observe in practice. We have validated our analysis on (i) several fundamental lock-free data structures, such as stacks, queues, deques, and counters, some of them employing dynamic helping mechanisms, and (ii) synthetic tests covering a wide range of possible lock-free designs. We show the ap...
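The flavor of the first, average-based framework can be conveyed with a deliberately coarse estimate: threads progress independently while contention is low, but successful retries serialize on the shared data structure once it saturates. This is purely illustrative and not the deliverable's exact formulas.

```python
# Coarse, average-based throughput estimate for a lock-free retry loop.
# cw: mean cycles spent in the application routine between operations;
# rl: mean cycles for one retry-loop attempt; p: number of threads.
def throughput_estimate(p, cw, rl):
    # Low contention: all p threads make progress independently, each
    # completing one operation every cw + rl cycles.
    uncontended = p / (cw + rl)
    # High contention: successful attempts serialize on the shared data
    # structure, so at most one success per rl cycles system-wide.
    serialized = 1.0 / rl
    return min(uncontended, serialized)   # operations per cycle
```

An average-based model like this ignores the distribution of `cw` and `rl` entirely, which is precisely why it remains applicable when those sizes vary dynamically; the Markov-chain framework trades that generality for tighter estimates under a specific size distribution.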