As the application fields of digital twins have expanded, various studies have aimed at optimizing their cost. Among them, studies on low-power, low-performance embedded devices have replicated the performance of existing devices at low cost. In this study, we attempt to reproduce, on a single-sensing device, particle count results similar to those of a multi-sensing device, without knowledge of the multi-sensing device's particle count acquisition algorithm. Through filtering, we suppressed noise and baseline drift in the raw data of the device. In addition, in determining the multi-threshold used to obtain the particle counts, the existing complex particle count determination algorithm was simplified so that a look-up table could be used. The proposed simplified particle count calculation algorithm reduced the optimal multi-threshold search time by 87% on average and the root mean square error by 58.5% compared to the existing method. In addition, the distribution of particle counts obtained from the optimal multi-thresholds was confirmed to have a shape similar to that obtained from the multi-sensing device.
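The multi-threshold counting step described above can be sketched as follows. This is a minimal illustration, assuming that each pulse in the filtered waveform is assigned to the highest threshold its peak crosses, so per-bin counts reduce to a simple table lookup; the function name, thresholds, and signal values are illustrative and not taken from the paper.

```python
import numpy as np

def count_particles(signal, thresholds):
    """Count pulse peaks per threshold bin (thresholds sorted ascending)."""
    counts = np.zeros(len(thresholds), dtype=int)
    # A sample is treated as a pulse peak if it exceeds both neighbors.
    for i in range(1, len(signal) - 1):
        if signal[i] > signal[i - 1] and signal[i] >= signal[i + 1]:
            # The highest threshold crossed determines the size bin.
            bin_idx = np.searchsorted(thresholds, signal[i], side="right") - 1
            if bin_idx >= 0:
                counts[bin_idx] += 1
    return counts

thresholds = np.array([0.2, 0.5, 0.8])       # three hypothetical size bins
signal = np.array([0.0, 0.3, 0.0, 0.6, 0.1, 0.9, 0.0])
print(count_particles(signal, thresholds))   # [1 1 1]
```

Because binning is a single `searchsorted` per peak, the per-bin counts behave like a look-up table over peak amplitude, which is what makes the simplified search over candidate multi-thresholds cheap.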
A digital twin is a widely used method that relies on digitized simulations of real-world characteristics because it is effective in predicting results at low cost. In digital twin analysis, the transfer function between the input and output data is an important research subject. In this study, we investigate the application of the digital twin method to dust particle sensing. A high-performance multichannel reference dust particle sensor provides particle counts as well as particulate matter information, whereas a lightweight embedded test device provides only a particle count. The particulate matter acquisition algorithm of the reference device is unknown and complex. Instead, we propose a simple method to calculate the transfer function using singular-value decomposition. In the experimental results, using singular-value decomposition, the predicted particulate matter of the test device was similar to that of the reference device. The obtained transfer function yields similar measurement results for the two dust particle sensor devices, confirming that particulate matter environmental information can be digitized even with low-power, lightweight sensor-embedded devices. In addition, the power consumption of the test device was approximately ten times lower than that of the reference device.
INDEX TERMS Digital twin, particle sensing, particulate matter, singular-value decomposition.
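The SVD-based transfer function estimation described above can be sketched as a linear least-squares fit solved via the Moore-Penrose pseudoinverse. This is a minimal sketch under assumed shapes: `X` stands for test-device particle counts (samples × channels) and `Y` for reference-device particulate matter readings (samples × outputs); the variable names and dimensions are illustrative, not from the paper.

```python
import numpy as np

# Synthetic data standing in for the two devices.
rng = np.random.default_rng(0)
X = rng.random((100, 4))        # test device: 100 samples, 4 count channels
T_true = rng.random((4, 3))     # unknown ground-truth mapping (for checking)
Y = X @ T_true                  # reference device: 3 PM outputs per sample

# Solve min_T ||X T - Y||_F using the SVD-based pseudoinverse of X.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
X_pinv = Vt.T @ np.diag(1.0 / s) @ U.T
T_hat = X_pinv @ Y

assert np.allclose(T_hat, T_true)
```

With noisy measurements the recovery is approximate rather than exact, and small singular values would typically be truncated or regularized before inversion; `np.linalg.pinv` or `np.linalg.lstsq` wrap the same computation.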
Arrhythmia is less frequent than a normal heartbeat in an electrocardiogram signal, and the analysis of an electrocardiogram measurement can require more than 24 hours of recording. Therefore, the efficient storage and transmission of electrocardiogram signals have been studied, and their importance has increased recently due to the miniaturization and weight reduction of measurement equipment. The polygonal approximation method based on dynamic programming can effectively achieve signal compression and fiducial point detection by expressing signals with a small number of vertices. However, the execution time and memory footprint grow rapidly with the length of the signal and the number of vertices, which makes the method unsuitable for lightweight, miniaturized equipment. In this paper, we propose a method that can be applied in embedded environments by optimizing the processing time and memory usage of the dynamic programming applied to the polygonal approximation of an ECG signal. The proposed method consists of three optimization steps. The first step exploits the characteristics of electrocardiogram signals in the polygonal approximation. Second, the size of a data bit is used as the threshold for the time difference of each vertex. Finally, a type conversion and memory optimization are applied, which allow real-time processing in embedded environments. Analyzing the performance of the proposed algorithm for a signal length L and number of vertices N, the execution time is reduced from O(L²N) to O(L), and the memory usage is reduced from O(L²N) to O(LN). In addition, the proposed method preserves fiducial point detection performance: in an experiment on the QT database (QT-DB) provided by PhysioNet, it achieved detection errors of −4.01 ± 7.99 ms and −5.46 ± 8.03 ms.
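The baseline dynamic programming that the abstract's optimizations target can be sketched as follows: choose N vertices of a signal so that the total squared error between the signal and its piecewise-linear interpolation is minimized. This is a minimal illustration of the unoptimized O(L²N)-time formulation, assuming the first and last samples are always vertices; function names and the test signal are illustrative, not from the paper.

```python
import numpy as np

def segment_error(y, i, j):
    """Squared error of approximating y[i..j] by the chord from i to j."""
    xs = np.arange(i, j + 1)
    chord = y[i] + (y[j] - y[i]) * (xs - i) / max(j - i, 1)
    return float(np.sum((y[i:j + 1] - chord) ** 2))

def polygonal_approx(y, n_vertices):
    """Return indices of n_vertices that minimize total chord error."""
    L = len(y)
    INF = float("inf")
    # cost[k][j]: best error ending at sample j using k+1 vertices.
    cost = [[INF] * L for _ in range(n_vertices)]
    prev = [[-1] * L for _ in range(n_vertices)]
    cost[0][0] = 0.0
    for k in range(1, n_vertices):
        for j in range(k, L):
            for i in range(k - 1, j):
                c = cost[k - 1][i] + segment_error(y, i, j)
                if c < cost[k][j]:
                    cost[k][j], prev[k][j] = c, i
    # Backtrack the chosen vertex indices from the last sample.
    path, j = [], L - 1
    for k in range(n_vertices - 1, 0, -1):
        path.append(j)
        j = prev[k][j]
    path.append(0)
    return path[::-1]

y = np.array([0.0, 1.0, 2.0, 1.0, 0.0, 1.0, 2.0])
print(polygonal_approx(y, 4))   # [0, 2, 4, 6]
```

The triple loop and the full `cost`/`prev` tables are exactly the O(L²N) time and memory that the paper's three optimization steps prune, for example by restricting the inner search range using ECG-specific vertex spacing.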