Federated learning (FL) is a machine learning setting where many clients (e.g. mobile devices or whole organizations) collaboratively train a model under the orchestration of a central server (e.g. service provider), while keeping the training data decentralized. FL embodies the principles of focused data collection and minimization, and can mitigate many of the systemic privacy risks and costs resulting from traditional, centralized machine learning and data science approaches. Motivated by the explosive growth in FL research, this paper discusses recent advances and presents an extensive collection of open problems and challenges.
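To make the setting concrete, below is a minimal sketch of federated averaging (FedAvg), the canonical server-orchestrated training loop in this line of work. The linear-regression clients, learning rate, and the `local_update`/`fed_avg` names are illustrative assumptions for this sketch, not anything specified by the paper.

```python
import numpy as np

def local_update(weights, data, lr=0.1, epochs=1):
    """One client's local training pass (here: gradient steps on least squares)."""
    w = weights.copy()
    X, y = data
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # gradient of 0.5*||Xw - y||^2 / n
        w -= lr * grad
    return w

def fed_avg(global_w, clients, rounds=10, lr=0.1):
    """Server loop: broadcast the model, train locally, average the results."""
    for _ in range(rounds):
        updates, sizes = [], []
        for data in clients:                      # in practice, a sampled subset
            updates.append(local_update(global_w, data, lr))
            sizes.append(len(data[1]))
        sizes = np.array(sizes, dtype=float)
        # Raw examples never leave the clients; only weights are averaged.
        global_w = sum(w * (n / sizes.sum()) for w, n in zip(updates, sizes))
    return global_w

# Toy demo: five clients with local regression data drawn from one true model.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(5):
    X = rng.normal(size=(50, 2))
    clients.append((X, X @ true_w + 0.1 * rng.normal(size=50)))
print(fed_avg(np.zeros(2), clients, rounds=50))  # converges near [2, -1]
```

The structure is the point: raw training data stays decentralized on each client, and only model parameters travel to the server for weighted averaging.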
High-speed, large-scale 3D imaging of neuronal activity poses a major challenge in neuroscience. Here, we demonstrate intrinsically simultaneous functional imaging of neuronal activity at single-neuron resolution for an entire Caenorhabditis elegans as well as for the whole brain of a larval zebrafish. Our technique captures the dynamics of spiking neurons in volumes of ~700 μm × 700 μm × 200 μm at 20 Hz, and its simplicity makes it an attractive tool for high-speed volumetric calcium imaging.
The recovery of objects obscured by scattering is an important goal in imaging and has been approached by exploiting, for example, coherence properties, ballistic photons or penetrating wavelengths. Common methods use scattered light transmitted through an occluding material, although these fail if the occluder is opaque. Light is scattered not only by transmission through objects, but also by multiple reflection from diffuse surfaces in a scene. This reflected light contains information about the scene that becomes mixed by the diffuse reflections before reaching the image sensor. This mixing is difficult to decode using traditional cameras. Here we report the combination of a time-of-flight technique and computational reconstruction algorithms to untangle image information mixed by diffuse reflection. We demonstrate a three-dimensional range camera able to look around a corner using diffusely reflected light that achieves sub-millimetre depth precision and centimetre lateral precision over 40 cm × 40 cm × 40 cm of hidden space.
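As a rough illustration of the reconstruction idea, the sketch below implements a simplified, unfiltered backprojection under a confocal assumption (illumination and observation share the same wall points): each candidate voxel accumulates the transient samples whose round-trip time matches its distance to the wall. The function names, time-bin width, and geometry are assumptions for illustration; the actual pipeline involves calibration and a filtering step omitted here.

```python
import numpy as np

C = 3e8     # speed of light (m/s)
DT = 1e-11  # time-bin width of the transient histograms (10 ps)

def backproject(wall_pts, transients, voxels):
    """For each hidden-scene voxel, sum the transient samples whose
    round-trip time from each wall point matches the voxel's distance."""
    vol = np.zeros(len(voxels))
    n_bins = transients.shape[1]
    for i, v in enumerate(voxels):
        d = 2.0 * np.linalg.norm(wall_pts - v, axis=1)  # round-trip path length
        bins = np.round(d / (C * DT)).astype(int)
        ok = bins < n_bins
        vol[i] = transients[np.nonzero(ok)[0], bins[ok]].sum()
    return vol  # large values indicate likely hidden-surface voxels

# Toy demo: one hidden point at (0, 0, 0.5) behind the wall plane z = 0.
wall = np.stack([np.linspace(-0.2, 0.2, 16), np.zeros(16), np.zeros(16)], axis=1)
hidden = np.array([0.0, 0.0, 0.5])
T = np.zeros((16, 512))
b = np.round(2 * np.linalg.norm(wall - hidden, axis=1) / (C * DT)).astype(int)
T[np.arange(16), b] = 1.0
grid = [np.array([0, 0, z]) for z in np.linspace(0.1, 1.0, 10)]
print(np.argmax(backproject(wall, T, grid)))  # index 4, i.e. z = 0.5
```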
In this paper, we describe an efficient image-based approach to computing and shading visual hulls from silhouette image data. Our algorithm takes advantage of epipolar geometry and incremental computation to achieve a constant rendering cost per rendered pixel. It does not suffer from the computational complexity, limited resolution, or quantization artifacts of previous volumetric approaches. We demonstrate the use of this algorithm in a real-time virtualized reality application running off a small number of video streams.
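For intuition, here is a brute-force version of the ray-silhouette test that the paper's incremental epipolar algorithm accelerates: sample points along a desired-view ray, project each sample into every reference camera, and keep the first sample inside all silhouettes. The camera-matrix convention, mask format, and sampling parameters are illustrative assumptions.

```python
import numpy as np

def in_silhouette(P, point, mask):
    """Project a 3D point with camera matrix P (3x4) and test the binary mask."""
    x = P @ np.append(point, 1.0)
    if x[2] <= 0:
        return False  # behind the camera
    u, v = int(round(x[0] / x[2])), int(round(x[1] / x[2]))
    return 0 <= v < mask.shape[0] and 0 <= u < mask.shape[1] and bool(mask[v, u])

def ray_hull_entry(origin, direction, cameras, t_max=10.0, n=256):
    """First ray parameter t whose 3D point lies inside every silhouette,
    i.e. the front surface of the visual hull along this viewing ray."""
    for t in np.linspace(0.0, t_max, n):
        p = origin + t * direction
        if all(in_silhouette(P, p, mask) for P, mask in cameras):
            return t
    return None  # ray misses the visual hull
```

The paper's contribution is to avoid exactly this per-sample projection cost by incrementally traversing the corresponding epipolar lines in each silhouette image, which is what yields the constant work per rendered pixel.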
Figure 1: Light field reconstruction from a single coded projection. We explore sparse reconstructions of 4D light fields from optimized 2D projections, using light field atoms as the fundamental building blocks of natural light fields. This example shows a coded sensor image captured with our camera prototype (upper left) and the recovered 4D light field (lower left and center). Parallax is successfully recovered (center insets) and allows for post-capture refocus (right). Even complex lighting effects such as occlusion, specularity, and refraction, exhibited here by the background, dragon, and tiger, respectively, can be recovered.

Abstract: Light field photography has gained significant research interest over the last two decades; today, commercial light field cameras are widely available. Nevertheless, most existing acquisition approaches either multiplex a low-resolution light field into a single 2D sensor image or require multiple photographs to acquire a high-resolution light field. We propose a compressive light field camera architecture that allows higher-resolution light fields to be recovered from a single image than was previously possible. The proposed architecture comprises three key components: light field atoms as a sparse representation of natural light fields, an optical design that allows for capturing optimized 2D light field projections, and robust sparse reconstruction methods to recover a 4D light field from a single coded 2D projection. In addition, we demonstrate a variety of other applications for light field atoms and sparse coding techniques, including 4D light field compression and denoising.
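The recovery step can be pictured as standard sparse coding: the coded sensor image is a linear projection y = Φ D α of a light field that is sparse in a dictionary D of light field atoms. The toy below uses random matrices and a small hand-rolled orthogonal matching pursuit in place of the learned 4D dictionary, the camera's optical projection, and the robust solver used in the actual system.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedy k-sparse solve of y ~ A @ x.
    Atoms selected by raw correlation (columns assumed roughly equal norm)."""
    r, idx = y.copy(), []
    x = np.zeros(A.shape[1])
    for _ in range(k):
        idx.append(int(np.argmax(np.abs(A.T @ r))))   # most correlated atom
        sol, *_ = np.linalg.lstsq(A[:, idx], y, rcond=None)
        r = y - A[:, idx] @ sol
    x[idx] = sol
    return x

# Toy compressive recovery: y = Phi @ D @ alpha with sparse alpha.
rng = np.random.default_rng(1)
n, m, atoms, k = 64, 16, 128, 3   # signal dim, measurements, dictionary size, sparsity
D = rng.normal(size=(n, atoms)); D /= np.linalg.norm(D, axis=0)  # "light field atoms"
Phi = rng.normal(size=(m, n))                   # coded 2D projection (sensing matrix)
alpha = np.zeros(atoms)
alpha[rng.choice(atoms, k, replace=False)] = rng.normal(size=k)
y = Phi @ (D @ alpha)                           # single coded sensor image
alpha_hat = omp(Phi @ D, y, k)
print(np.linalg.norm(D @ alpha - D @ alpha_hat))  # small when recovery succeeds
```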
Figure 1: (a) A scene lit by a single source of light; (b) its direct illumination component; (c) its global illumination component. The scene includes a wide variety of physical phenomena that produce complex global illumination effects. We present several methods for separating the direct (b) and global (c) illumination components of the scene using high-frequency illumination. In this example, the components were estimated by shifting a single checkerboard pattern 25 times to overcome the optical and resolution limits of the source (projector) and sensor (camera). The direct and global images have been brightness-scaled by a factor of 1.25. In theory, the separation can be done using just two images. When the separation results are needed only at a resolution lower than those of the source and sensor, the separation can be done with a single image.

Abstract: We present fast methods for separating the direct and global illumination components of a scene measured by a camera and illuminated by a light source. In theory, the separation can be done with just two images taken with a high-frequency binary illumination pattern and its complement. In practice, a larger number of images is used to overcome the optical and resolution limitations of the camera and the source. The approach does not require the material properties of objects and media in the scene to be known. However, the illumination frequency must be high enough to adequately sample the global components received by scene points. We present separation results for scenes that include complex interreflections, subsurface scattering, and volumetric scattering. Several variants of the separation approach are also described. When a sinusoidal illumination pattern is used with different phase shifts, the separation can be done using just three images. When the computed images are of lower resolution than the source and the camera, smoothness constraints are used to perform the separation from a single image. Finally, in the case of a static scene lit by a simple point source, such as the sun, a moving occluder and a video camera can be used to perform the separation. We also show several simple examples of how novel images of a scene can be computed from the separation results.
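The two-image intuition reduces to a per-pixel max/min computation. Under a binary pattern with a fraction α of source pixels on, a lit scene point measures roughly L_d + αL_g and an unlit one αL_g, so the per-pixel maximum and minimum over the shifted patterns separate the two components. A minimal sketch (ignoring the correction for light leaked by "off" source pixels that the full method includes):

```python
import numpy as np

def separate(images, alpha=0.5):
    """Direct/global separation from frames taken under shifted high-frequency
    binary patterns, with a fraction `alpha` of source pixels on per pattern.
    Per pixel: Lmax ~ Ld + alpha*Lg and Lmin ~ alpha*Lg, then solve for both."""
    stack = np.stack(images)                  # shape: (num_shifts, H, W)
    lmax, lmin = stack.max(axis=0), stack.min(axis=0)
    direct = lmax - lmin
    global_ = lmin / alpha
    return direct, global_
```

With the 25 shifted checkerboards of Figure 1, `images` would hold the 25 camera frames; the sinusoidal variant mentioned above replaces the max/min with a closed-form solve over three phase-shifted frames.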
We propose a new design for complex self-evolving structures that vary over time through environmental interaction. In conventional 3D printing systems, materials are meant to be stable rather than active, and fabricated models are designed and printed as static objects. Here, we introduce a novel approach for simulating and fabricating self-evolving structures that transform into a predetermined shape, changing their properties and function after fabrication. The new locally coordinated bending primitives combine into a single system, allowing for a global deformation that can stretch, fold, and bend given an environmental stimulus.
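As a loose kinematic analogy (not the material model of the work itself), the way locally specified bends compose into a global deformation can be illustrated with a 2D chain whose joint angles scale with a stimulus parameter:

```python
import numpy as np

def fold_chain(bend_angles, seg_len=1.0, stimulus=1.0):
    """Compose local bending primitives into a global 2D shape.
    Each joint bends by stimulus * angle; vertex positions accumulate."""
    pts, heading = [np.zeros(2)], 0.0
    for a in bend_angles:
        heading += stimulus * a  # local, stimulus-driven bend
        pts.append(pts[-1] + seg_len * np.array([np.cos(heading), np.sin(heading)]))
    return np.array(pts)

# A flat strip (stimulus=0) folds into a closed square when fully activated.
angles = [0.0, np.pi / 2, np.pi / 2, np.pi / 2]
print(fold_chain(angles, stimulus=0.0))  # straight line of segments
print(fold_chain(angles, stimulus=1.0))  # square outline returning to origin
```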
At present, designing convolutional neural network (CNN) architectures requires both human expertise and labor. New architectures are handcrafted by careful experimentation or modified from a handful of existing networks. We introduce MetaQNN, a meta-modeling algorithm based on reinforcement learning to automatically generate high-performing CNN architectures for a given learning task. The learning agent is trained to sequentially choose CNN layers using Q-learning with an ε-greedy exploration strategy and experience replay. The agent explores a large but finite space of possible architectures and iteratively discovers designs with improved performance on the learning task. On image classification benchmarks, the agent-designed networks (consisting of only standard convolution, pooling, and fully-connected layers) beat existing networks designed with the same layer types and are competitive with state-of-the-art methods that use more complex layer types. We also outperform existing meta-modeling approaches for network design on image classification tasks.
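A skeletal rendition of that loop follows, under heavy simplifying assumptions: a four-symbol layer vocabulary, a tabular Q-function keyed on (previous layer, depth), and a proxy reward standing in for actually training and validating each sampled CNN.

```python
import random

LAYERS = ["conv", "pool", "fc", "terminate"]
MAX_DEPTH = 6

def sample_architecture(Q, eps):
    """Roll out one architecture with epsilon-greedy layer choices."""
    arch, state = [], ("start", 0)
    while state[0] != "terminate" and state[1] < MAX_DEPTH:
        if random.random() < eps:
            action = random.choice(LAYERS)            # explore
        else:
            action = max(LAYERS, key=lambda a: Q.get((state, a), 0.5))  # exploit
        arch.append(action)
        state = (action, state[1] + 1)
    return arch

def q_update(Q, arch, reward, lr=0.1, gamma=1.0):
    """One-step Q-learning backups along the sampled layer sequence;
    the (proxy) validation accuracy is the single terminal reward."""
    state = ("start", 0)
    for i, action in enumerate(arch):
        nxt = (action, state[1] + 1)
        terminal = i == len(arch) - 1
        future = 0.0 if terminal else max(Q.get((nxt, a), 0.5) for a in LAYERS)
        q = Q.get((state, action), 0.5)
        Q[(state, action)] = q + lr * ((reward if terminal else 0.0)
                                       + gamma * future - q)
        state = nxt

def proxy_reward(arch):  # stand-in for training/validating the sampled CNN
    return arch.count("conv") / MAX_DEPTH

Q, replay, eps = {}, [], 1.0
for _ in range(500):
    arch = sample_architecture(Q, eps)
    replay.append((arch, proxy_reward(arch)))
    for a, r in random.sample(replay, min(8, len(replay))):  # experience replay
        q_update(Q, a, r)
    eps = max(0.1, eps * 0.99)  # anneal exploration toward greedy
print(sample_architecture(Q, eps=0.0))  # greedy architecture after training
```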