The Microsoft Kinect sensor has been widely used in many applications since the launch of its first version. Recently, Microsoft released a new version of the Kinect sensor with improved hardware; however, the accuracy of the new sensor has yet to be assessed. In this paper, we measure the depth accuracy of the newly released Kinect v2 depth sensor and derive a cone model to describe its accuracy distribution. We then evaluate the variance of the captured depth values using depth entropy. In addition, we propose a trilateration method that improves depth accuracy by using multiple Kinect sensors simultaneously. Experimental results are provided to validate the proposed model and method.
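The entropy-based variance evaluation mentioned above can be sketched as follows. This is a minimal illustration only: it assumes the entropy is the Shannon entropy of a histogram of repeated depth readings at a pixel over a fixed working range (Kinect v2 senses roughly 0.5–4.5 m); the paper's exact formulation is not given in the abstract, and `depth_entropy` is a hypothetical helper.

```python
import numpy as np

def depth_entropy(depth_samples, num_bins=64, value_range=(0.0, 4500.0)):
    """Shannon entropy (bits) of depth readings in millimeters.

    Readings are histogrammed over a fixed range so that a tight
    cluster of values yields low entropy and scattered, noisy
    readings yield high entropy.
    """
    hist, _ = np.histogram(depth_samples, bins=num_bins, range=value_range)
    p = hist / hist.sum()
    p = p[p > 0]  # drop empty bins so log2 is defined
    return float(-np.sum(p * np.log2(p)))

rng = np.random.default_rng(0)
# A stable pixel: 1000 readings tightly clustered around 1.5 m.
stable = rng.normal(1500.0, 1.0, 1000)
# A noisy pixel: readings scattered over a wide range.
noisy = rng.normal(1500.0, 200.0, 1000)

# Lower entropy = more consistent depth measurements.
assert depth_entropy(stable) < depth_entropy(noisy)
```

Fixing the histogram range (rather than letting it adapt to each sample set) is what makes the entropy values of different pixels comparable.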
Augmented-reality (AR) technology has been developing rapidly for decades. A recently released cutting-edge AR device, the Microsoft HoloLens, has attracted considerable attention for its advanced capabilities. In this paper, we report the design and execution of a series of experiments that quantitatively evaluate the HoloLens' performance in head localization, real-environment reconstruction, spatial mapping, hologram visualization, and speech recognition. The results show that the HoloLens estimates head posture more accurately at low movement speeds, reconstructs the environment most precisely for flat surfaces under bright conditions, anchors augmented content at desired locations most accurately at distances of 1.5 m and 2.5 m, displays objects with an average size error of 6.64%, and recognizes speech commands with accuracy rates of 74.47% and 66.87% for user-defined and system-defined commands, respectively. We also discuss our work and the limitations of the experiments in further detail.
The assistive, adaptive, and rehabilitative applications of EEG-based robot control and navigation are undergoing a major transformation in both dimension and scope. Against the backdrop of artificial intelligence, medical and nonmedical robots have developed rapidly and are gradually being applied to enhance people's quality of life. We focus on connecting the brain to a mobile home robot by translating brain signals into computer commands, building a brain-computer interface (BCI) that promises to greatly enhance the quality of life of disabled and able-bodied people alike by considerably improving their autonomy, mobility, and abilities. Several types of robots have been controlled using BCI systems to complete simple and/or complicated real-time tasks with high performance. In this paper, a new EEG-based intelligent teleoperation system was designed for a mobile wall-crawling cleaning robot. The robot uses a crawler mechanism instead of traditional wheels so that it can clean windows as well as floors. To control the robot's position as it climbs the wall and completes its cleaning tasks, we extracted steady-state visually evoked potentials (SSVEPs) from the collected electroencephalography (EEG) signals. The visual stimulation interface of the proposed SSVEP-based BCI consisted of four flickering tiles at different frequencies (6 Hz, 7.5 Hz, 8.57 Hz, and 10 Hz). Seven subjects were able to smoothly control the movement directions of the cleaning robot by gazing at the corresponding flicker. To solve the multiclass problem, and thereby clean the wall within a short period, the canonical correlation analysis (CCA) classification algorithm was used. Offline and online experiments were conducted to analyze and classify EEG signals and to use them as real-time commands.
The proposed system was efficient in both the classification and control phases, achieving an accuracy of 89.92% and a fast response, with a bit rate of 22.23 bits/min. These results suggest that the proposed EEG-based cleaning robot system is promising for smart-home control, completing wall-cleaning tasks efficiently, safely, and robustly.
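The CCA-based classification described above can be sketched as follows. This is a minimal illustration under stated assumptions: reference templates are sine/cosine pairs at each stimulus frequency and its second harmonic (a standard choice for CCA-based SSVEP detection), and the channel count, sampling rate, and preprocessing here are illustrative, since the abstract does not specify them.

```python
import numpy as np

def max_canonical_corr(X, Y):
    """Largest canonical correlation between two multichannel signals.

    Computed as the top singular value of Qx^T Qy, where Qx and Qy are
    orthonormal bases (via QR) of the centered column spaces of X and Y.
    """
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    Qx, _ = np.linalg.qr(X)
    Qy, _ = np.linalg.qr(Y)
    return np.linalg.svd(Qx.T @ Qy, compute_uv=False)[0]

def reference_signals(freq, n_samples, fs, harmonics=2):
    """Sine/cosine templates at a stimulus frequency and its harmonics."""
    t = np.arange(n_samples) / fs
    cols = []
    for h in range(1, harmonics + 1):
        cols.append(np.sin(2 * np.pi * h * freq * t))
        cols.append(np.cos(2 * np.pi * h * freq * t))
    return np.column_stack(cols)

def classify_ssvep(eeg, fs, freqs=(6.0, 7.5, 8.57, 10.0)):
    """Pick the stimulus frequency whose templates best match the EEG."""
    scores = [max_canonical_corr(eeg, reference_signals(f, eeg.shape[0], fs))
              for f in freqs]
    return freqs[int(np.argmax(scores))]

# Synthetic demo: 8 EEG channels, 2 s at 256 Hz, an 8.57 Hz SSVEP
# response buried in noise (channel phases and noise are random).
fs = 256.0
t = np.arange(int(2 * fs)) / fs
rng = np.random.default_rng(0)
eeg = np.column_stack([
    np.sin(2 * np.pi * 8.57 * t + phase) + 0.5 * rng.standard_normal(t.size)
    for phase in rng.uniform(0.0, np.pi, 8)
])
assert classify_ssvep(eeg, fs) == 8.57
```

Because the classifier only compares correlations against precomputed templates, no per-subject training data is required, which is one reason CCA is a common choice for multiclass SSVEP problems like the four-direction control task above.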
Very low proportions of publications from low- and middle-income countries (LAMIC) have been documented in multiple fields, and some researchers from these countries believe that editors hold a biased attitude toward their studies. Under-representation of editorial board members from LAMIC has been revealed in many research fields, but it has not been investigated in the field of foot and ankle surgery. The current study aimed to analyze the composition of the editorial boards of leading foot and ankle journals and to characterize their international representation. Five leading journals in the field of foot and ankle surgery were included. The editorial board members were collected from the official websites of these journals, the countries of the board members were classified according to World Bank income groups, and the board compositions were analyzed. In total, 229 editorial board members from 29 countries were identified. The United States (29.69%) had the greatest number of editors, followed by the United Kingdom (20.52%), Australia (8.30%), Italy (6.11%), and Germany (5.68%). When classified by region, 49.34% of board members were from Europe & Central Asia, followed by North America (31.44%), East Asia & Pacific (14.41%), Latin America & Caribbean (2.62%), and Middle East & North Africa (2.18%); no editors were from South Asia or Sub-Saharan Africa. A total of 217 editors (94.76%) were from high-income countries, followed by upper-middle-income countries (3.06%) and lower-middle-income countries (2.18%); no members were from low-income countries. There is a lack of international representation on the editorial boards of leading foot and ankle journals: the boards are largely composed of editors from high-income countries, with severe under-representation of LAMIC.
Three-dimensional (3D) sensing and printing technologies have reshaped our world in recent years. In this article, we provide a comprehensive overview of the techniques that make up the pipeline from 3D sensing to printing. We compare the latest 3D sensors and 3D printers and introduce several sensing, post-processing, and printing techniques drawn from both commercial deployments and published research. In addition, we demonstrate several devices, software tools, and experimental results from our related projects to further elaborate the details of this process. A case study illustrates the possible trade-offs along this pipeline. Current progress, future research trends, and potential risks of 3D technologies are also discussed.