Virtual rehabilitation (VR) is a novel motor rehabilitation therapy in which rehabilitation exercises occur through interaction with bespoke virtual environments. These virtual environments dynamically adapt their activity to match therapy progress. Adaptation should be guided by the cognitive and emotional state of the patient, neither of which is directly observable. Here, we present our first steps towards inferring non-observable attentional state from unobtrusively observable seated posture, so that this knowledge can later be exploited by a VR platform to modulate its behaviour. The space of seated postures was discretized, and 648 pictures of acted representations were submitted to crowd evaluation to determine the attributed state of attention. A semi-supervised classifier based on Naïve Bayes with structural improvement was learnt to uncover a predictive relation between posture and attributed attention. Internal validity was established following a 2×5 cross-validation strategy. Based on 4959 votes from the crowd, classification accuracy reached a promising 96.29% (µ±σ = 87.59±6.59) and the F-measure reached 82.35% (µ±σ = 69.72±10.50). At this rate of classification, we believe it is safe to claim posture as a reliable proxy for attributed attentional state. It follows that unobtrusive posture monitoring can be exploited to guide intelligent adaptation in a virtual rehabilitation platform. This study further helps to identify the critical aspects of posture that permit inference of attention.
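To make the evaluation protocol concrete, the following is a minimal sketch of the 2×5 cross-validation over crowd-labelled posture data, in Python with scikit-learn. It is illustrative only: the synthetic features and a plain Gaussian Naïve Bayes stand in for the discretized posture descriptors and the semi-supervised, structurally improved classifier used in the study.

    # Minimal sketch: 2x5 cross-validation of a Naive Bayes posture classifier.
    # Synthetic data stands in for the 648 crowd-labelled posture pictures.
    import numpy as np
    from sklearn.naive_bayes import GaussianNB
    from sklearn.model_selection import RepeatedKFold, cross_validate

    rng = np.random.default_rng(0)
    X = rng.integers(0, 5, size=(648, 6)).astype(float)  # discretized posture codes (assumed layout)
    y = rng.integers(0, 2, size=648)                     # crowd-attributed attention: 0/1

    cv = RepeatedKFold(n_splits=5, n_repeats=2, random_state=0)  # the 2x5 strategy
    scores = cross_validate(GaussianNB(), X, y, cv=cv, scoring=("accuracy", "f1"))
    print(scores["test_accuracy"].mean(), scores["test_f1"].mean())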
Dual cameras providing visible-thermal multispectral pairs capture both visual and thermal appearance, thereby enabling around-the-clock pedestrian detection in varied conditions and applications, including autonomous driving and intelligent transportation systems. However, because real-world scenarios vary greatly, the performance of a detector trained on a source dataset may change dramatically when it is evaluated on another dataset. A large amount of training data is often necessary to guarantee detection performance in a new scenario. Typically, human annotators must carry out the data labeling, which is time-consuming, labor-intensive and unscalable. To overcome this problem, we propose a novel unsupervised transfer learning framework for multispectral pedestrian detection that adapts a multispectral pedestrian detector to the target domain using pseudo training labels. In particular, auxiliary detectors are utilized, and different label fusion strategies are introduced according to the estimated environmental illumination level. Intermediate-domain images are generated by translating the source images to mimic the target ones, acting as a better starting point for updating the parameters of the pedestrian detector. Experimental results on the KAIST and FLIR ADAS datasets demonstrate that the proposed method achieves new state-of-the-art performance without any manual training annotations on the target data.
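As a rough illustration of the illumination-dependent fusion step, the sketch below weights pseudo labels from the two modalities by an estimated illumination level. The threshold, weights and confidence cut-off are assumptions made for illustration, not the rule used in the paper.

    # Hedged sketch: fuse visible and thermal pseudo labels by illumination.
    import numpy as np

    def estimate_illumination(visible_img: np.ndarray) -> float:
        # Mean luminance of the visible frame, normalized to [0, 1].
        return float(visible_img.mean()) / 255.0

    def fuse_pseudo_labels(vis_dets, thr_dets, illumination, day_thresh=0.35):
        # Detections are (x1, y1, x2, y2, score) tuples. Daytime frames
        # favour the visible detector; nighttime frames favour thermal.
        w_vis = 0.7 if illumination >= day_thresh else 0.3  # assumed weights
        w_thr = 1.0 - w_vis
        fused = [(x1, y1, x2, y2, s * w_vis) for (x1, y1, x2, y2, s) in vis_dets]
        fused += [(x1, y1, x2, y2, s * w_thr) for (x1, y1, x2, y2, s) in thr_dets]
        return [d for d in fused if d[4] >= 0.5]  # keep confident pseudo labels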
Pedestrian detection using visible-thermal pairs has recently come to play a key role in around-the-clock applications, such as public surveillance and autonomous driving. However, the performance of a well-trained pedestrian detector may drop significantly when it is applied to a new scenario. Normally, achieving good performance in the new scenario requires manual annotation of the dataset, which is costly and unscalable. In this work, an unsupervised transfer learning framework is proposed for visible-thermal pedestrian detection tasks. Given detectors well trained on a source dataset, the proposed framework uses an iterative process to generate and fuse training labels automatically, with the help of two auxiliary single-modality detectors (visible and thermal). For label fusion, knowledge of daytime and nighttime is used to assign priorities to labels according to their illumination, which improves the quality of the generated training labels. After each iteration, the existing detectors are updated using the new training labels. Experimental results demonstrate that the proposed method achieves state-of-the-art performance without any manual training labels on the target dataset.
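The overall iterative process can be pictured as the self-training loop sketched below; the detector and retraining interfaces are hypothetical stand-ins, since the actual networks and update rules are specific to the paper.

    # Illustrative self-training loop: generate, fuse, and retrain on pseudo labels.
    from typing import Callable, List, Tuple

    Box = Tuple[float, float, float, float, float]  # x1, y1, x2, y2, score

    def self_training_loop(
        detect_multi: Callable[[object], List[Box]],    # multispectral detector
        detect_vis: Callable[[object], List[Box]],      # auxiliary visible detector
        detect_thr: Callable[[object], List[Box]],      # auxiliary thermal detector
        fuse: Callable[[List[Box], List[Box], List[Box], bool], List[Box]],
        retrain: Callable[[List[Tuple[object, List[Box]]]], None],
        target_frames: List[Tuple[object, bool]],       # (frame, is_daytime)
        iterations: int = 3,
    ) -> None:
        for _ in range(iterations):
            pseudo_set = []
            for frame, is_day in target_frames:
                labels = fuse(detect_multi(frame), detect_vis(frame),
                              detect_thr(frame), is_day)
                pseudo_set.append((frame, labels))
            retrain(pseudo_set)  # update detectors with the new pseudo labels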
Given its virtually algorithmic procedure, the Fugl-Meyer Assessment (FMA) of motor recovery is amenable to automation, reducing subjectivity, alleviating therapists' burden and, collaterally, reducing costs. Several attempts to automate the FMA have been reported recently. However, a cost-effective solution matching expert criteria remains unfulfilled, perhaps because these attempts rely on sensor-specific representations of the limb or have thus far depended on a trial-and-error strategy for building the underpinning computational model. Here, we propose a sensor-abstracted representation. In particular, we improve previously reported results on the automation of the FMA by classifying a manifold-embedded representation capitalizing on quaternions, and we explore a wider range of classifiers. With the enhanced modeling, overall classification accuracy is boosted to 87% (mean: 82% ± std: 4.53), well above the maximum reported in the literature thus far, 51.03% (mean: 48.72 ± std: 2.10). The improved model brings automatic FMA closer to practical use, with implications for rehabilitation programs both on the ward and at home.
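To suggest what a sensor-abstracted, quaternion-based representation might look like in code, the sketch below converts per-joint Euler angles to quaternion feature vectors and fits an off-the-shelf classifier. The joint layout, the synthetic data and the random forest are illustrative assumptions; the paper's manifold embedding and classifier search are not reproduced here.

    # Sketch: quaternion features from joint orientations, then classification.
    import numpy as np
    from scipy.spatial.transform import Rotation
    from sklearn.ensemble import RandomForestClassifier

    def quaternion_features(euler_joints: np.ndarray) -> np.ndarray:
        # euler_joints: (n_joints, 3) Euler angles in radians for one pose.
        # Returns a flat (n_joints * 4,) quaternion vector, sensor-agnostic.
        return Rotation.from_euler("xyz", euler_joints).as_quat().ravel()

    rng = np.random.default_rng(0)
    poses = rng.uniform(-np.pi, np.pi, size=(200, 5, 3))  # synthetic 5-joint poses
    X = np.stack([quaternion_features(p) for p in poses])
    y = rng.integers(0, 3, size=200)                      # stand-in FMA item scores
    clf = RandomForestClassifier(random_state=0).fit(X, y)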