Face-to-face interactions are important for a variety of individual behaviors and outcomes. In recent years, a number of human sensor technologies have been proposed to incorporate direct observations into behavioral studies of face-to-face interactions. One of the most promising emerging technologies is the application of active Radio Frequency Identification (RFID) badges. They are increasingly applied in behavioral studies because of their low cost, straightforward applicability, and moderate ethical concerns. However, despite the attention that RFID badges have recently received, there is a lack of systematic tests of how valid RFID badges are in measuring face-to-face interactions. With two studies, we aim to fill this gap. Study 1 (N = 11) examines how well data assessed with RFID badges correspond with video data of the same interactions (construct validity) and how this fit can be improved using straightforward data processing strategies. The analyses show that the RFID badges have a sensitivity of 50%, which can be enhanced to 65% when flickering signals with gaps of less than 75 s are interpolated. The specificity is relatively less affected by this interpolation process (before interpolation 97%, after interpolation 94.7%), resulting in an improved accuracy of the measurement. In Study 2 (N = 73) we show that self-report data on social interactions correspond highly with data gathered with the RFID badges (criterion validity).
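The interpolation step described above can be sketched in a few lines: short zero-runs ("flickering" gaps) between two badge detections are filled in, while longer absences are left untouched. This is a minimal illustration, not the authors' actual pipeline; the sampling rate, the binary contact encoding, and the function name are assumptions.

```python
import numpy as np

def interpolate_gaps(signal, max_gap):
    """Fill short 0-runs (gaps) that are bounded by detections on both sides.

    signal:  1-D binary array, one sample per time step (e.g. one per second);
             1 = badge-detected contact, 0 = no detection.
    max_gap: gaps strictly shorter than this many samples are filled.
    """
    out = signal.copy()
    in_gap = False
    gap_start = 0
    for i, v in enumerate(signal):
        if v == 0 and not in_gap:
            in_gap, gap_start = True, i
        elif v == 1 and in_gap:
            in_gap = False
            # only fill gaps preceded by a detection (not a leading silence)
            if gap_start > 0 and (i - gap_start) < max_gap:
                out[gap_start:i] = 1
    return out

# toy example: a 2-sample flicker is filled, the trailing absence is not
raw = np.array([1, 1, 0, 0, 1, 0, 0, 0, 0, 0])
print(interpolate_gaps(raw, max_gap=3))  # -> [1 1 1 1 1 0 0 0 0 0]
```

With one sample per second, `max_gap=75` would correspond to the 75 s threshold from Study 1. Filling gaps raises sensitivity (missed contacts are recovered) at the cost of some specificity (non-contact seconds are relabeled as contact), matching the trade-off reported above.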
Deblurring is a fundamental inverse problem in bioimaging. It requires modelling the point spread function (PSF), which captures the optical distortions entailed by the image formation process. The PSF limits the spatial resolution attainable for a given microscope. However, recent applications require a higher resolution, and have prompted the development of super-resolution techniques to achieve sub-pixel accuracy. This requirement restricts the class of suitable PSF models to analog ones. In addition, deblurring is computationally intensive, hence further requiring computationally efficient models. A natural candidate fitting both requirements is the Gaussian model. However, this model cannot capture the rich tail structures found in both theoretical and empirical PSFs. In this paper, we aim at improving the reconstruction accuracy beyond the Gaussian model, while preserving its computational efficiency. We introduce a new class of analog PSF models based on Gaussian mixtures. The number of Gaussian kernels controls both the modelling accuracy and the computational efficiency of the model: the lower the number of kernels, the lower the accuracy and the higher the efficiency. To explore this accuracy-efficiency trade-off, we propose a variational formulation of the PSF calibration problem, where a convex sparsity-inducing penalty on the number of Gaussian kernels allows trading accuracy for efficiency. We derive an efficient algorithm based on a fully-split formulation of alternating split Bregman. We assess our framework on synthetic and real data and demonstrate a better reconstruction accuracy in both geometry and photometry in point source localisation, a fundamental inverse problem in fluorescence microscopy.

Index Terms: Quantitative fluorescence microscopy, model-based image processing, Bayesian modelling, alternating split Bregman, point spread function, parametric dictionary, virtual microscope framework. arXiv:1809.01579v2 [eess.IV]
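The core idea of a Gaussian-mixture PSF can be sketched as a weighted sum of isotropic Gaussian kernels, where a narrow high-weight component models the core and wider low-weight components model the tails. This is a hedged illustration of the model class only, not the paper's calibration algorithm; the function name, the centred/isotropic parameterisation, and the example weights and widths are assumptions.

```python
import numpy as np

def gaussian_mixture_psf(x, y, weights, sigmas):
    """Evaluate a centred, isotropic Gaussian-mixture PSF on a grid.

    weights: mixture weights w_k (summing to 1 gives a normalised PSF)
    sigmas:  standard deviations sigma_k of the Gaussian kernels
    """
    r2 = x**2 + y**2
    psf = np.zeros_like(r2, dtype=float)
    for w, s in zip(weights, sigmas):
        # each kernel is a unit-mass 2-D Gaussian, scaled by its weight
        psf += w * np.exp(-r2 / (2 * s**2)) / (2 * np.pi * s**2)
    return psf

# a narrow core plus a wide, low-weight tail component
grid = np.linspace(-5.0, 5.0, 101)
X, Y = np.meshgrid(grid, grid)
psf = gaussian_mixture_psf(X, Y, weights=[0.8, 0.2], sigmas=[0.8, 2.5])
```

Dropping kernels (setting weights to zero) reduces evaluation cost at the expense of tail fidelity, which is exactly the accuracy-efficiency trade-off that the sparsity-inducing penalty in the abstract is designed to control.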