Digital in-line holography (DIH) is widely used to reconstruct the 3D shapes of microscopic objects from their 2D holograms. One of the technical challenges in the reconstruction stage is eliminating the twin image that originates from the phase-conjugate wavefront. Twin-image removal is typically formulated as a non-linear inverse problem, since the scattering process involved in generating the hologram is irreversible. Conventional phase-recovery methods rely on multiple holographic measurements at different distances from the object plane, combined with iterative algorithms. Recently, end-to-end deep learning (DL) methods have been utilized to reconstruct the object wavefront (as a surrogate for the 3D structure of the object) directly from a single-shot in-line digital hologram. However, massive numbers of data pairs are required to train the DL model to an acceptable reconstruction precision. In contrast to typical image processing problems, well-curated datasets for in-line digital holography do not exist. The trained models are also highly influenced by the objects' morphological properties and hence can vary from one application to another. Data collection can therefore be prohibitively laborious and time-consuming, which is a critical drawback of using DL methods for DIH. In this paper, we propose a novel DL method that exploits the main characteristic of auto-encoders for blind single-shot hologram reconstruction, based solely on the captured sample and without the need for a large dataset of samples with available ground truth to train the model. Simulation results demonstrate the superior performance of the proposed method compared to state-of-the-art methods for single-shot hologram reconstruction.
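The forward model that any DIH reconstruction method (learned or iterative) must invert is free-space propagation of the object wavefront to the sensor plane. The sketch below implements a standard angular-spectrum propagator and verifies that complex-field propagation is numerically invertible; the wavelength, pixel pitch, distance, and Gaussian test object are illustrative assumptions, not values from the paper.

```python
import numpy as np

def angular_spectrum(field, wavelength, dx, z):
    """Propagate a complex field by distance z via the angular-spectrum method."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    fx2 = fx[:, None] ** 2 + fx[None, :] ** 2
    # Free-space transfer function; evanescent components are suppressed
    arg = 1.0 - (wavelength ** 2) * fx2
    kz = 2j * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    H = np.where(arg > 0, np.exp(kz * z), 0.0)
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Illustrative (assumed) parameters: 532 nm laser, 1 um pixels, 1 mm distance
wl, dx, z, n = 532e-9, 1e-6, 1e-3, 128
yy, xx = np.mgrid[:n, :n] - n // 2
obj = np.exp(-(xx ** 2 + yy ** 2) / (2 * 10.0 ** 2)).astype(complex)  # Gaussian object

sensor = angular_spectrum(obj, wl, dx, z)     # complex field at the sensor plane
recon = angular_spectrum(sensor, wl, dx, -z)  # numerical back-propagation
```

The round trip recovers the object exactly because the full complex field is propagated. In DIH only the intensity is recorded, so back-propagating the real-valued hologram superimposes a defocused conjugate term, which is precisely the twin image the paper's auto-encoder approach must suppress.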
Recently, physically unclonable functions (PUFs) have received considerable attention from the research community due to their potential use in security mechanisms for applications such as the Internet of Things (IoT). The concept generally exploits fabrication variability and the naturally embedded randomness of device characteristics for secure identification. This approach complements and improves upon conventional cryptographic security algorithms by covering their vulnerability to counterfeiting, cloning attacks, and physical hijacking. In this work, we propose a new identification/authentication mechanism built on a specific implementation of optical PUFs: electrochemically formed dendritic patterns. Dendritic tags are built by growing unique, complex, and unclonable nanoscale metallic patterns on highly nonreactive substrates using electrolyte solutions. Dendritic patterns with 3D surfaces are technically impossible to reproduce, so they can serve as fingerprints of objects. Current optical PUF-based identification mechanisms rely on image processing methods that require high-complexity computations and massive storage and communication capacity to store and exchange high-resolution image databases in large-scale networks. To address these issues, we propose a lightweight identification algorithm that converts the images of dendritic patterns into representative graphs and uses a graph-matching approach for device identification. More specifically, we develop a probabilistic graph-matching algorithm that links similar feature points in the test and reference graphs while considering the consistency of their local subgraphs. The proposed method demonstrates a high level of accuracy in the presence of imaging artifacts, noise, and skew compared to existing image-based algorithms.
The computational complexity of the algorithm grows linearly with the number of extracted feature points, making it suitable for large-scale networks.
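The matching-with-consistency idea can be sketched in miniature as follows. The ratio-test matcher and the pairwise-distance consistency vote below are simplified, quadratic-time stand-ins for the paper's probabilistic, linear-time algorithm; all thresholds, the synthetic keypoints, and the descriptor model are invented for illustration.

```python
import numpy as np

def match_features(desc_a, desc_b, ratio=0.8):
    """Nearest-neighbor descriptor matching with a ratio test (simplified stand-in)."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        j, k = np.argsort(dists)[:2]
        if dists[j] < ratio * dists[k]:  # best match clearly better than runner-up
            matches.append((i, j))
    return matches

def consistent(matches, pts_a, pts_b, tol=0.05):
    """Keep matches whose pairwise geometry agrees with most other matches,
    mimicking the local-subgraph consistency idea from the abstract."""
    keep = []
    for (i, j) in matches:
        votes = sum(
            abs(np.linalg.norm(pts_a[i] - pts_a[p]) -
                np.linalg.norm(pts_b[j] - pts_b[q])) < tol
            for (p, q) in matches if (p, q) != (i, j)
        )
        if votes >= len(matches) // 2:
            keep.append((i, j))
    return keep

# Synthetic demo: the test pattern is the reference pattern seen by a shifted camera
rng = np.random.default_rng(1)
pts_a = rng.uniform(0, 1, (6, 2))
pts_b = pts_a + np.array([0.3, -0.2])              # rigid shift preserves geometry
desc_a = rng.standard_normal((6, 8))
desc_b = desc_a + 0.01 * rng.standard_normal((6, 8))  # slightly noisy re-imaging
good = consistent(match_features(desc_a, desc_b), pts_a, pts_b)
```

Because the shift preserves all pairwise distances, every correct correspondence survives the consistency vote; a spurious match would fail it.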
In this paper, we develop a distributed mechanism for spectrum sharing between a network of unmanned aerial vehicles (UAVs) and licensed terrestrial networks. This method provides a practical solution for situations where the UAV network needs external spectrum when dealing with a congested band, or must change its operational frequency due to security threats. Here we study a scenario in which the UAV network performs a remote sensing mission. In this model, the UAVs are categorized into two clusters: relaying UAVs and sensing UAVs. The relay UAVs provide a relaying service to a licensed network in order to obtain spectrum access for the rest of the UAVs, which perform the sensing task. We develop a distributed mechanism in which the UAVs locally decide whether to participate in relaying or sensing, considering the fact that communications among UAVs may not be feasible or reliable. The UAVs learn the optimal task allocation using a distributed reinforcement learning algorithm. Convergence of the algorithm is discussed, and simulation results are presented for different scenarios to verify the convergence.
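The flavor of distributed task learning can be conveyed with a heavily simplified stand-in: independent epsilon-greedy value learners, one per UAV, each choosing between sensing and relaying and observing only a shared team reward. The reward shape (penalizing deviation from a required relay count), the agent count, and all hyperparameters are illustrative assumptions, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)
N_UAVS, N_RELAYS_NEEDED = 4, 2     # assumed: 2 relays buy spectrum access for the rest
ALPHA, EPISODES = 0.1, 3000
Q = np.zeros((N_UAVS, 2))          # per-UAV value of action 0 = sense, 1 = relay

for t in range(EPISODES):
    eps = max(0.01, 0.3 * (1 - t / EPISODES))      # decaying exploration
    explore = rng.random(N_UAVS) < eps
    acts = np.where(explore, rng.integers(0, 2, N_UAVS), Q.argmax(axis=1))
    # Team reward: penalize deviation from the relay count the licensed
    # network requires (an invented stand-in for the paper's utility)
    r = -abs(int(acts.sum()) - N_RELAYS_NEEDED)
    for i in range(N_UAVS):        # each UAV updates only its own local estimate
        Q[i, acts[i]] += ALPHA * (r - Q[i, acts[i]])

allocation = Q.argmax(axis=1)      # learned relay/sense split, no inter-UAV messages
```

Each UAV updates only its own table from the common payoff, so no explicit coordination messages are exchanged, which mirrors the abstract's assumption that inter-UAV communication may be unreliable.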
Wildfires are among the costliest and deadliest natural disasters in the US, damaging millions of hectares of forest resources and threatening the lives of people and animals. Of particular importance are the risks to firefighters and operational forces, which highlights the need to leverage technology to minimize danger to people and property. FLAME (Fire Luminosity Airborne-based Machine learning Evaluation) offers a dataset of aerial images of fires along with methods for fire detection and segmentation that can help firefighters and researchers develop optimal fire management strategies. This paper provides a fire image dataset collected by drones during prescribed burning of piled detritus in an Arizona pine forest. The dataset includes video recordings and thermal heatmaps captured by infrared cameras. The captured videos and images are annotated and labeled frame-wise to help researchers easily apply their fire detection and modeling algorithms. The paper also presents solutions to two machine learning problems: (1) binary classification of video frames based on the presence or absence of fire flames, for which an artificial neural network (ANN) method is developed that achieves a 76% classification accuracy; and (2) fire detection using segmentation methods to precisely determine fire borders, for which a deep learning method is designed based on the U-Net architecture.
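The frame-classification task can be illustrated with a toy pipeline: mean-RGB features fed to a logistic classifier trained by gradient descent. This is a deliberately minimal stand-in for the paper's ANN (it says nothing about the U-Net branch), and the "fire"/"no-fire" color statistics below are invented purely for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

def synthetic_features(n, fire):
    """Mean-RGB feature vectors for fake frames (invented color statistics:
    fire frames are red-dominant, non-fire frames are green-dominant)."""
    base = np.array([0.9, 0.3, 0.1]) if fire else np.array([0.2, 0.5, 0.3])
    return base + 0.05 * rng.standard_normal((n, 3))

def train_logistic(X, y, lr=0.5, epochs=500):
    """Plain gradient-descent logistic regression (stand-in for the ANN)."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        w -= lr * X.T @ (p - y) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

X = np.vstack([synthetic_features(50, True), synthetic_features(50, False)])
y = np.concatenate([np.ones(50), np.zeros(50)])
w, b = train_logistic(X, y)
acc = (((X @ w + b) > 0) == y.astype(bool)).mean()
```

On this well-separated synthetic data the classifier is essentially perfect; the paper's 76% figure reflects the far harder real aerial footage, where smoke, lighting, and background foliage blur the class boundary.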
In this paper, we propose a drone-based wildfire monitoring system for remote and hard-to-reach areas. The system utilizes autonomous unmanned aerial vehicles (UAVs), with the main advantage of providing on-demand monitoring service faster than the current approaches of using satellite images, manned aircraft, and remotely controlled drones. Furthermore, using autonomous drones minimizes human intervention in risky wildfire zones. In particular, to develop a fully autonomous system, we propose a distributed leader-follower coalition formation model that clusters a set of drones into multiple coalitions collectively covering the designated monitoring field. Each coalition leader is a drone that employs observer drones, potentially with different sensing and imaging capabilities, to hover in circular paths and collect imagery from the impacted areas. The objectives of the proposed system are: i) to cover the entire fire zone with a minimum number of drones, and ii) to minimize the energy consumption and latency of the available drones flying to the fire zone. Simulation results confirm that the performance of the proposed system, without the need for inter-coalition communications, approaches that of a centrally optimized system.
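The coalition-formation objective can be caricatured by a greedy, distance-based assignment: each drone joins the nearest monitoring zone that still has room, which loosely approximates the energy/latency objective. The fixed coalition size, the leader rule (closest member to the zone center), and the centralized loop below are all simplifying assumptions, not the paper's distributed mechanism.

```python
import numpy as np

def form_coalitions(drones, zones, size):
    """Greedy distance-based coalition formation (simplified, centralized
    caricature of the distributed leader-follower model)."""
    dist = np.linalg.norm(drones[:, None, :] - zones[None, :, :], axis=2)
    coalitions = {z: [] for z in range(len(zones))}
    for i in np.argsort(dist.min(axis=1)):         # nearest drones commit first
        for z in np.argsort(dist[i]):              # try zones by this drone's distance
            if len(coalitions[z]) < size:
                coalitions[z].append(int(i))
                break                              # drone i has joined a coalition
    # Leader of each coalition: the member closest to the zone center
    leaders = {z: min(m, key=lambda i: dist[i, z]) for z, m in coalitions.items() if m}
    return coalitions, leaders

rng = np.random.default_rng(0)
drones = rng.uniform(0, 10, (10, 2))   # 10 available drones (positions in km, assumed)
zones = rng.uniform(0, 10, (3, 2))     # 3 circular monitoring zone centers
coalitions, leaders = form_coalitions(drones, zones, size=3)
```

With 10 drones and 9 slots, every coalition fills to its required size and one drone stays in reserve; drones with the shortest flights commit first, which is the greedy proxy for minimizing travel energy.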