In this paper we introduce a template-based method for recognizing human actions, called Action MACH. Our approach is based on a Maximum Average Correlation Height (MACH) filter. A common limitation of template-based methods is their inability to generate a single template from a collection of examples. MACH captures intra-class variability by synthesizing a single Action MACH filter for a given action class. We generalize the traditional MACH filter to video (3D spatiotemporal volumes) and to vector-valued data. By analyzing the response of the filter in the frequency domain, we avoid the high computational cost commonly incurred in template-based approaches. Vector-valued data is analyzed using the Clifford Fourier transform, a generalization of the Fourier transform intended for both scalar and vector-valued data. Finally, we perform an extensive set of experiments and compare our method with some of the most recent approaches in the field, using publicly available datasets and two new annotated human action datasets that include actions performed in classic feature films and sports broadcast television.
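The core idea of frequency-domain template matching can be sketched as follows. This is a minimal simplification, not the paper's full four-term Action MACH optimization or its Clifford Fourier extension: the template is synthesized as the average training spectrum normalized by the average power spectrum (a common MACH-style reduction), and correlation with a test volume is done entirely in the Fourier domain to avoid sliding-window costs. The function names and the regularizer `alpha` are illustrative assumptions.

```python
import numpy as np

def mach_template(examples, alpha=1e-3):
    """Synthesize a simplified MACH-style template in the frequency domain.

    examples: array of shape (N, T, H, W) -- N spatiotemporal training volumes.
    Returns the template's 3D spectrum h(f) = m(f) / (D(f) + alpha), where
    m is the mean training spectrum and D the average power spectrum.
    (A sketch: the full MACH criterion also optimizes noise and similarity terms.)
    """
    X = np.fft.fftn(examples, axes=(1, 2, 3))   # per-example 3D spectra
    mean_spec = X.mean(axis=0)                  # m(f): average spectrum
    avg_power = (np.abs(X) ** 2).mean(axis=0)   # D(f): average power spectrum
    return mean_spec / (avg_power + alpha)

def correlate(volume, h_freq):
    """Correlate a test volume with the template via the FFT.

    Multiplication by the conjugate spectrum in the frequency domain
    equals circular cross-correlation in the spatiotemporal domain.
    """
    V = np.fft.fftn(volume)
    return np.fft.ifftn(V * np.conj(h_freq)).real
```

A strong response peak in the returned volume indicates where (in space and time) the action template matches; the FFT makes this a single pass over the clip rather than an explicit search over all shifts.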
Background: Elevated pre-operative neutrophil:lymphocyte ratio (NLR) has been identified as a predictor of survival in patients with hepatocellular and colorectal cancer. The aim of this study was to examine the prognostic value of an elevated preoperative NLR following resection for oesophageal cancer.
Methods: Patients who underwent resection for oesophageal carcinoma from June 1997 to September 2007 were identified from a local cancer database. Data on demographics, conventional prognostic markers, laboratory analyses including blood count results, and histopathology were collected and analysed.
Results: A total of 294 patients were identified with a median age at diagnosis of 65.2 (IQR 59-72) years. The median pre-operative time of blood sample collection was three days (IQR 1-8). The median neutrophil count was 64.2 × 10⁹/litre, the median lymphocyte count 23.9 × 10⁹/litre, whilst the NLR was 2.69 (IQR 1.95-4.02). NLR did not prove to be a significant predictor of number of involved lymph nodes (Cox regression, p = 0.754), disease recurrence (p = 0.288) or death (Cox regression, p = 0.374). Furthermore, survival time was not significantly different between patients with high (≥ 3.5) or low (< 3.5) NLR (p = 0.49).
Conclusion: Preoperative NLR does not appear to offer useful predictive ability for outcome, disease-free and overall survival following oesophageal cancer resection.
Automated recognition of human activities or actions has great significance, with wide-ranging applications including surveillance, robotics, and personal health monitoring. Over the past few years, many computer vision-based methods have been developed for recognizing human actions from RGB and depth camera videos. These methods include space-time trajectories, motion encoding, key pose extraction, space-time occupancy patterns, depth motion maps, and skeleton joints. However, these camera-based approaches are affected by background clutter and illumination changes, and are applicable only within a limited field of view. Wearable inertial sensors provide a viable solution to these challenges but are subject to several limitations, such as location and orientation sensitivity. Owing to the complementary nature of the data obtained from cameras and inertial sensors, the use of multiple sensing modalities for accurate recognition of human actions is gradually increasing. This paper presents a viable multimodal feature-level fusion approach for robust human action recognition, which utilizes data from multiple sensors, including an RGB camera, a depth sensor, and wearable inertial sensors. We extracted computationally efficient features from the data obtained from the RGB-D video camera and inertial body sensors. These features include densely extracted histogram of oriented gradients (HOG) features from RGB/depth videos and statistical signal attributes from wearable sensor data. The proposed human action recognition (HAR) framework is tested on a publicly available multimodal human action dataset, UTD-MHAD, consisting of 27 different human actions. K-nearest neighbor and support vector machine classifiers are used for training and testing the proposed fusion model for HAR. The experimental results indicate that the proposed scheme achieves better recognition results than the state of the art.
The feature-level fusion of RGB and inertial sensors provides the overall best performance for the proposed system, with an accuracy rate of 97.6%.
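The fusion and classification steps described above can be sketched as follows. This is a minimal illustration under stated assumptions: the paper's HOG and statistical-attribute extraction is replaced by precomputed per-sample feature arrays, and only the feature-level concatenation and a KNN classifier (one of the two classifiers named in the abstract) are shown. The function name and array shapes are illustrative, not from the paper.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def feature_level_fusion(rgb_feats, depth_feats, inertial_feats):
    """Feature-level fusion: concatenate per-sample feature vectors
    from each modality into one joint descriptor.

    Each argument is an array of shape (n_samples, n_features_modality);
    the result has shape (n_samples, sum of per-modality feature counts).
    """
    return np.concatenate([rgb_feats, depth_feats, inertial_feats], axis=1)

def train_knn(fused_features, labels, k=3):
    """Fit a K-nearest neighbor classifier on the fused descriptors."""
    return KNeighborsClassifier(n_neighbors=k).fit(fused_features, labels)
```

Concatenation keeps the decision to the classifier (as opposed to decision-level fusion, where each modality is classified separately and the votes are merged), which lets the classifier exploit cross-modal correlations in the joint feature space.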
Mortality rates following PPCI were higher in elderly patients, although they remained acceptable. Invasively measured shock index before PPCI is the strongest independent predictor of long-term outcome in elderly patients. In addition, predictors of in-hospital mortality were similar across different age groups but differed significantly in relation to longer-term mortality.
The mechanisms of lumen enlargement after stenting involved (1) significant axial redistribution of plaque from the lesion into the reference segments, (2) vessel expansion, and (3) either plaque embolization or compression.