Sensors are devices that quantify physical aspects of the world around us. This ability is essential for gaining knowledge about human activities. Human activity recognition plays an important role in people's everyday lives: solving many human-centered problems, such as health care and individual assistance, requires inferring a range of simple to complex human activities. Therefore, a well-defined categorization of sensing technology is essential for the systematic design of human activity recognition systems. Extending the sensor categorization proposed by White, we survey the most prominent research works that utilize different sensing technologies for human activity recognition tasks. To the best of our knowledge, no thorough sensor-driven survey considers all sensor categories in the domain of human activity recognition with respect to the sampled physical properties, including a detailed comparison across sensor categories. Our contribution is to close this gap by providing insight into the state-of-the-art developments. We identify the limitations of each sensor category with respect to its hardware and software characteristics and draw comparisons based on benchmark features retrieved from the research works introduced in this survey. Finally, we conclude with general remarks and provide future research directions for human activity recognition within the presented sensor categorization.

Index Terms: Sensor categorization, human activity recognition, public databases for human activity recognition, physical sensors, sensor benchmark.
In this paper, we present MixFaceNets, a set of extremely efficient, high-throughput models for accurate face verification, inspired by mixed depthwise convolutional kernels. Extensive experimental evaluations on the Labeled Faces in the Wild (LFW), AgeDB, MegaFace, and IARPA Janus Benchmark (IJB-B and IJB-C) datasets have shown the effectiveness of our MixFaceNets for applications requiring extremely low computational complexity. At the same level of computational complexity (≤ 500M FLOPs), our MixFaceNets outperform MobileFaceNets on all the evaluated datasets, achieving 99.60% accuracy on LFW, 97.05% accuracy on AgeDB-30, 93.60% TAR (at FAR=1e-6) on MegaFace, 90.94% TAR (at FAR=1e-4) on IJB-B, and 93.08% TAR (at FAR=1e-4) on IJB-C. With computational complexity between 500M and 1G FLOPs, our MixFaceNets achieve results comparable to the top-ranked models while using significantly fewer FLOPs and less computational overhead, which demonstrates the practical value of the proposed MixFaceNets. All training code, pre-trained models, and training logs have been made available at https://github.com/fdbtrs/mixfacenets.
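The mixed depthwise convolution that inspires MixFaceNets splits the input channels into groups and applies a different depthwise kernel size to each group, capturing multiple receptive fields at depthwise cost. As a minimal illustrative sketch (not the authors' implementation; the kernel sizes, channel split, and averaging kernels below are assumptions for demonstration), the idea can be written in plain NumPy:

```python
import numpy as np

def depthwise_conv2d(x, kernels):
    """Naive per-channel ("same"-padded) depthwise convolution.
    x: (C, H, W); kernels: one (k, k) array per channel."""
    C, H, W = x.shape
    out = np.zeros((C, H, W), dtype=float)
    for c in range(C):
        k = kernels[c].shape[0]
        p = k // 2
        xp = np.pad(x[c], p)  # zero-pad so output keeps H x W
        for i in range(H):
            for j in range(W):
                out[c, i, j] = np.sum(xp[i:i + k, j:j + k] * kernels[c])
    return out

def mixconv(x, kernel_sizes=(3, 5, 7)):
    """Mixed depthwise conv sketch: channels are split evenly into
    groups, and each group uses a different kernel size. Uniform
    averaging kernels stand in for learned weights."""
    C = x.shape[0]
    groups = np.array_split(np.arange(C), len(kernel_sizes))
    kernels = [None] * C
    for idx, ks in zip(groups, kernel_sizes):
        for c in idx:
            kernels[c] = np.ones((ks, ks)) / (ks * ks)
    return depthwise_conv2d(x, kernels)
```

Because every channel is convolved with a single 2-D kernel rather than a full cross-channel filter bank, the FLOP count stays close to that of a plain depthwise convolution, which is what makes the design attractive under tight budgets such as the ≤ 500M FLOPs regime discussed above.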