The frequency of hepatitis C among Pakistani donors is the highest in the region, while that of hepatitis B is declining gradually. Volunteer donors show lower frequencies of the tested infections than replacement donors. Compared with neighboring India, syphilis occurs with a similar frequency, but HIV is seen less commonly.
Abstract: The issue of sustainability is a vital long-term goal for organizations and as such has formed the basis of much academic research over the last two decades. Organizational sustainability is defined as the ability of an organization to accomplish a range of economic, environmental, and human performance objectives. As one of the most studied topics in organizational science, employee engagement at work is seen as a critical component of achieving sustainable organizational success. To better understand the employee engagement discourse, this study examined the keywords that appear in the titles and abstracts of the employee engagement research domain using burst detection and social network analysis techniques. A total of 1,406 articles relevant to employee engagement, published from 1990 to 2015, were included and investigated in the study. The results reveal the fading, emerging, and central themes within the employee engagement domain.
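The abstract names burst detection and social network analysis as its methods. The snippet below is a minimal, hypothetical sketch of the network-analysis half only: it builds a keyword co-occurrence graph with networkx and ranks keywords by degree centrality to surface "central" themes. The sample records are invented, not data from the study.

```python
# Hypothetical sketch: keyword co-occurrence network analysis with networkx.
# `records` (one keyword list per article) is invented illustrative input.
from itertools import combinations
import networkx as nx

records = [
    ["employee engagement", "burnout", "job demands"],
    ["employee engagement", "sustainability", "performance"],
    ["sustainability", "performance", "job demands"],
]

G = nx.Graph()
for keywords in records:
    for a, b in combinations(sorted(set(keywords)), 2):
        # Increment the edge weight each time two keywords co-occur in an article.
        if G.has_edge(a, b):
            G[a][b]["weight"] += 1
        else:
            G.add_edge(a, b, weight=1)

# Degree centrality (unweighted) highlights central themes in the network.
for kw, c in sorted(nx.degree_centrality(G).items(), key=lambda x: -x[1]):
    print(f"{kw}: {c:.2f}")
```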
Most shape-from-focus algorithms enhance the initial focus volume by local averaging with a fixed rectangular window. In this linear filtering, the window size affects the accuracy of the depth map: a small window cannot suppress the noise properly, whereas a large window oversmooths the object shape. Moreover, any window size smooths focus values uniformly, so an erroneous depth map is obtained. In this paper, we suggest the use of iterative 3-D anisotropic nonlinear diffusion filtering (ANDF) to enhance the image focus volume. In contrast to linear filtering, ANDF exploits the local structure of the focus values to suppress noise while preserving edges. The proposed scheme is tested on image sequences of synthetic and real objects, and the results demonstrate its effectiveness.
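As a rough illustration of the idea (a Perona-Malik-style scheme, not necessarily the paper's exact ANDF formulation), the sketch below diffuses a 3-D focus volume iteratively: conduction is attenuated where focus-value gradients are large, so noise is smoothed while edges survive. Parameter values and the random test volume are assumptions.

```python
# Minimal Perona-Malik-style 3-D anisotropic diffusion on a focus volume.
import numpy as np

def anisotropic_diffusion_3d(volume, n_iter=10, kappa=0.1, gamma=0.1):
    """Iteratively diffuse a 3-D focus volume, reducing flux across edges."""
    v = volume.astype(np.float64).copy()
    for _ in range(n_iter):
        flux = np.zeros_like(v)
        for axis in range(3):
            # Forward difference toward the next voxel along this axis.
            grad = np.roll(v, -1, axis=axis) - v
            # Edge-stopping function: low conduction where gradients are large.
            c = np.exp(-(grad / kappa) ** 2)
            d = c * grad
            # Divergence along this axis: inflow minus outflow.
            flux += d - np.roll(d, 1, axis=axis)
        v += gamma * flux  # gamma <= 1/6 keeps the explicit 3-D scheme stable
    return v

# Example: enhance a noisy focus volume (depth x height x width).
enhanced = anisotropic_diffusion_3d(np.random.rand(20, 64, 64))
```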
Automated recognition of human activities or actions has great significance, as it underpins wide-ranging applications including surveillance, robotics, and personal health monitoring. Over the past few years, many computer vision-based methods have been developed for recognizing human actions from RGB and depth camera videos. These methods include space-time trajectories, motion encoding, key pose extraction, space-time occupancy patterns, depth motion maps, and skeleton joints. However, these camera-based approaches are affected by background clutter and illumination changes and are applicable only to a limited field of view. Wearable inertial sensors provide a viable alternative to these challenges but are subject to several limitations, such as location and orientation sensitivity. Because the data obtained from cameras and inertial sensors are complementary, the use of multiple sensing modalities for accurate recognition of human actions is gradually increasing. This paper presents a viable multimodal feature-level fusion approach for robust human action recognition that utilizes data from multiple sensors: an RGB camera, a depth sensor, and wearable inertial sensors. We extract computationally efficient features from the RGB-D video camera and inertial body sensor data: densely extracted histogram of oriented gradients (HOG) features from the RGB/depth videos and statistical signal attributes from the wearable sensor data. The proposed human action recognition (HAR) framework is tested on UTD-MHAD, a publicly available multimodal human action dataset consisting of 27 different human actions. K-nearest-neighbor and support vector machine classifiers are used for training and testing the proposed fusion model. The experimental results indicate that the proposed scheme achieves better recognition results than the state of the art; the feature-level fusion of RGB and inertial sensors gives the overall best performance, with an accuracy of 97.6%.
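A minimal sketch of the feature-level fusion step, assuming the HOG and inertial features have already been extracted (the array shapes and random values below are placeholders, not the paper's features): per-sample feature vectors are concatenated and fed to an SVM, one of the two classifiers the paper uses.

```python
# Feature-level fusion for HAR: concatenate modality features, then classify.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples = 200
hog_rgb = rng.random((n_samples, 324))    # stand-in HOG features, RGB video
hog_depth = rng.random((n_samples, 324))  # stand-in HOG features, depth video
inertial = rng.random((n_samples, 36))    # stand-in statistical attributes
labels = rng.integers(0, 27, n_samples)   # 27 action classes, as in UTD-MHAD

# Feature-level fusion: concatenate per-sample vectors before training.
fused = np.hstack([hog_rgb, hog_depth, inertial])
X_tr, X_te, y_tr, y_te = train_test_split(
    fused, labels, test_size=0.3, random_state=0)

clf = SVC(kernel="rbf").fit(X_tr, y_tr)
print("accuracy:", clf.score(X_te, y_te))  # chance-level on random placeholders
```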
Abstract: In the diagnosis of skin melanoma from histopathological images, detecting the melanocytes in the epidermis area is an important step. This detection is difficult, however, because other keratinocytes that closely resemble melanocytes are also present. This paper proposes a novel computer-aided technique for segmentation of melanocytes in skin histopathological images. To reduce local intensity variation, a mean-shift algorithm is applied for the initial segmentation of the image. A local region recursive segmentation algorithm is then proposed to filter out the candidate nuclei regions based on domain prior knowledge. To distinguish the melanocytes from other keratinocytes in the epidermis area, a novel descriptor, the local double ellipse descriptor (LDED), is proposed to measure the local features of the candidate regions. The LDED uses two parameters, region ellipticity and local pattern characteristics, to distinguish the melanocytes from the candidate nuclei regions. Experimental results on 28 different histopathological images of skin tissue at different zoom factors show that the proposed technique provides superior performance.
Index Terms: Histopathological image analysis, image segmentation, local descriptor, object detection, pattern recognition.
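For the mean-shift initial segmentation step, OpenCV's pyrMeanShiftFiltering is one common off-the-shelf implementation; the bandwidths and file name below are illustrative assumptions, not the paper's settings.

```python
# Sketch of mean-shift smoothing as an initial segmentation step (OpenCV).
import cv2

img = cv2.imread("skin_slide.png")  # hypothetical histopathology image path

# Arguments: spatial window radius (10) and color window radius (20).
# Flattens local intensity variation so nuclei regions become more
# homogeneous before candidate-region filtering.
smoothed = cv2.pyrMeanShiftFiltering(img, 10, 20)
cv2.imwrite("skin_slide_meanshift.png", smoothed)
```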
One of the major requirements of content-based image retrieval (CBIR) systems is to ensure meaningful retrieval against query images. The performance of these systems is severely degraded when image content that does not contain the objects of interest is included during the image representation phase. Image segmentation is often considered a solution, but no technique can guarantee robust object extraction; moreover, most segmentation techniques are slow and their results unreliable. To overcome these problems, this paper presents a bandelet transform-based image representation technique that reliably captures information about the major objects in an image. For retrieval, artificial neural networks (ANN) are applied, and the performance of the system is evaluated on three standard datasets used in the CBIR domain.
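The bandelet transform is not available in common Python libraries, so the sketch below assumes bandelet feature vectors have already been computed and illustrates only the ANN-based retrieval stage: an MLP predicts the query's semantic class, and images of that class are ranked by feature distance. All names, data, and shapes are invented for illustration.

```python
# ANN retrieval stage over precomputed (here: random stand-in) features.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
features = rng.random((500, 128))    # stand-in bandelet features, one row per image
classes = rng.integers(0, 10, 500)   # stand-in semantic categories

ann = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500).fit(features, classes)

def retrieve(query_vec, top_k=5):
    """Rank database images of the predicted class by Euclidean distance."""
    cls = ann.predict(query_vec[None, :])[0]
    idx = np.flatnonzero(classes == cls)
    dists = np.linalg.norm(features[idx] - query_vec, axis=1)
    return idx[np.argsort(dists)[:top_k]]

print(retrieve(rng.random(128)))  # indices of the top-5 retrieved images
```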
For the last three decades, content-based image retrieval (CBIR) has been an active research area, representing a viable solution for retrieving similar images from an image repository. In this article, we propose a novel CBIR technique based on the visual-words fusion of speeded-up robust features (SURF) and fast retina keypoint (FREAK) feature descriptors. SURF is a sparse descriptor, whereas FREAK is a dense descriptor. Moreover, SURF is a scale- and rotation-invariant descriptor that performs better in terms of repeatability, distinctiveness, and robustness. It is robust to noise, detection errors, and geometric and photometric deformations, and it performs better than FREAK under low illumination. In contrast, FREAK is a retina-inspired, fast descriptor that performs better than SURF on classification-based problems. Experimental results show that the proposed technique based on the visual-words fusion of the SURF and FREAK descriptors combines the strengths of both and resolves the aforementioned issues. The qualitative and quantitative analysis performed on three image collections, namely Corel-1000, Corel-1500, and Caltech-256, shows that the proposed visual-words fusion significantly improves CBIR performance compared with the feature fusion of both descriptors and with state-of-the-art image retrieval techniques.
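A hedged sketch of the visual-words fusion idea, assuming an OpenCV build with the non-free xfeatures2d module (opencv-contrib-python): SURF and FREAK descriptors are quantized against separate vocabularies, and the two bag-of-visual-words histograms are concatenated. Vocabulary sizes and thresholds are illustrative; in practice each vocabulary would be trained on descriptors pooled over the whole collection, not a single image as here.

```python
# Visual-words fusion of SURF and FREAK descriptors (illustrative sketch).
import cv2
import numpy as np
from sklearn.cluster import KMeans

def bovw_histogram(descs, kmeans):
    """Quantize descriptors to visual words; return a normalized histogram."""
    words = kmeans.predict(descs.astype(np.float64))
    hist = np.bincount(words, minlength=kmeans.n_clusters).astype(np.float64)
    return hist / max(hist.sum(), 1.0)

surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
freak = cv2.xfeatures2d.FREAK_create()

img = cv2.imread("query.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical image
kps, surf_desc = surf.detectAndCompute(img, None)
_, freak_desc = freak.compute(img, kps)  # FREAK describes the SURF keypoints

# Tiny per-image vocabularies keep the sketch self-contained; this assumes
# the image yields at least n_clusters descriptors of each kind.
km_surf = KMeans(n_clusters=32).fit(surf_desc.astype(np.float64))
km_freak = KMeans(n_clusters=32).fit(freak_desc.astype(np.float64))

# Visual-words fusion: concatenate the two bag-of-visual-words histograms.
fused = np.hstack([bovw_histogram(surf_desc, km_surf),
                   bovw_histogram(freak_desc, km_freak)])
print(fused.shape)  # (64,) fused image signature
```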