The meteoric growth of internet data in recent years has created the challenge of mining and extracting useful patterns from large datasets. The growth of digital libraries and video databases makes it increasingly important, and more difficult, to extract useful information from raw data so that crimes can be detected and prevented automatically. Street-crime snatching and theft detection is a major challenge in video mining. The main task is to select the features/objects that typically occur at the time of a snatching; the number of moving targets affects the performance, speed, and amount of motion in an anomalous video. The dataset used in this paper is Snatch 101. The videos in the dataset are divided into frames, which are labelled and segmented for training. We applied the VGG19 convolutional neural network architecture to extract object features and compared them with the features and objects of the original video. The main contribution of our research is to create frames from the videos and then label the objects; objects are selected from frames in which anomalous activities can be detected. The proposed system has not previously been applied to crime prediction, and it is computationally efficient and effective compared with state-of-the-art systems, outperforming them with 81% accuracy.
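The feature-comparison step, matching a frame's extracted features against those of a reference video, can be illustrated with a minimal numpy sketch. This is not the paper's pipeline: mean-pooled pixel statistics stand in for VGG19 feature maps, and the function names and similarity threshold are hypothetical.

```python
import numpy as np

def extract_features(frame: np.ndarray) -> np.ndarray:
    """Stand-in feature extractor: mean-pool 8x8 blocks of a grayscale frame
    and center the result. In the paper, a VGG19 network would produce the
    feature vector instead."""
    h, w = frame.shape
    pooled = frame[:h - h % 8, :w - w % 8].reshape(h // 8, 8, w // 8, 8).mean(axis=(1, 3))
    feats = pooled.ravel()
    return feats - feats.mean()

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def is_anomalous(frame: np.ndarray, reference: np.ndarray, threshold: float = 0.8) -> bool:
    """Flag a frame whose features diverge from the reference video's features."""
    return cosine_similarity(extract_features(frame), extract_features(reference)) < threshold

rng = np.random.default_rng(0)
reference = rng.random((64, 64))
same = reference + 0.01 * rng.random((64, 64))   # near-duplicate frame
different = rng.random((64, 64))                 # unrelated frame
```

A near-duplicate frame stays above the similarity threshold, while an unrelated frame falls below it and is flagged as anomalous.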
Feature/edge-preserving noise-removal techniques have strong potential in several application domains, including medical image processing. Magnetic resonance (MR) images tend to acquire Rician noise during acquisition. In this article, we present a genetic-algorithm-based adapted selective non-local means (GASNLM) filter scheme that suppresses noise in MR images while preserving image features as much as possible. We apply the GASNLM filter with optimal parameter values to different frequency regions of the image; the filter parameter values are optimized by a genetic algorithm (GA). A modification of the NLM filter, a selective weight matrix, is also proposed to preserve image features. The results demonstrate the soundness of the method. We compare our results with many well-known and recent techniques and discuss the improvements.
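The core weighting step of a non-local means filter can be sketched in plain numpy: each pixel becomes a weighted average of pixels in a search window, with weights driven by patch similarity. The parameter values below are illustrative only, not the GA-optimized values from the article, and the proposed selective weight matrix is omitted.

```python
import numpy as np

def nlm_denoise(img: np.ndarray, patch: int = 3, search: int = 7, h: float = 0.3) -> np.ndarray:
    """Minimal non-local means. `h` controls how sharply weights fall off
    with patch dissimilarity (a GA would tune such parameters per region)."""
    pr, sr = patch // 2, search // 2
    padded = np.pad(img, pr + sr, mode="reflect")
    out = np.zeros_like(img, dtype=float)
    H, W = img.shape
    for i in range(H):
        for j in range(W):
            ci, cj = i + pr + sr, j + pr + sr
            ref = padded[ci - pr:ci + pr + 1, cj - pr:cj + pr + 1]
            weights, values = [], []
            for di in range(-sr, sr + 1):
                for dj in range(-sr, sr + 1):
                    ni, nj = ci + di, cj + dj
                    cand = padded[ni - pr:ni + pr + 1, nj - pr:nj + pr + 1]
                    d2 = np.mean((ref - cand) ** 2)      # patch dissimilarity
                    weights.append(np.exp(-d2 / h ** 2))  # similar patches weigh more
                    values.append(padded[ni, nj])
            w = np.asarray(weights)
            out[i, j] = np.dot(w, values) / w.sum()
    return out

rng = np.random.default_rng(1)
clean = np.zeros((24, 24))
clean[8:16, 8:16] = 1.0                                   # bright square: an edge-rich region
noisy = clean + 0.15 * rng.standard_normal(clean.shape)   # Gaussian stand-in for Rician noise
denoised = nlm_denoise(noisy)
err_noisy = np.mean((noisy - clean) ** 2)
err_denoised = np.mean((denoised - clean) ** 2)
```

Because weights collapse for patches that straddle the square's boundary, flat regions are smoothed while the edge is largely preserved.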
In the medical field, visualization of the organs is imperative for the accurate diagnosis and treatment of disease. Brain-tumor diagnosis and surgery likewise require an effective 3D visualization of the brain for the radiologist. Detecting and reconstructing brain tumors in 3D from MRI is a computationally expensive and error-prone task. The proposed system detects the tumor and presents a 3D visualization model of the brain and the tumor inside it, which greatly helps the radiologist diagnose and analyze the tumor. We propose a multi-phase segmentation and visualization technique that overcomes many problems of 3D volume segmentation methods, such as the loss of fine details. Segmentation is performed in three phases, which reduces the chance of error. The system finds contours for the skull, the brain, and the tumor; these contours are stacked, and two novel methods are used to build the 3D visualization models. The results of these techniques, particularly the interpolation-based one, are impressive. The proposed system is tested against a publicly available dataset [41] and MRI datasets from the MRI & CT Center, Rawalpindi, Pakistan [42].
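The interpolation-based reconstruction idea, building intermediate slices between segmented contours before rendering a volume, can be sketched as follows. This is a toy illustration under assumed names and slice counts, not the paper's implementation.

```python
import numpy as np

def stack_contours(masks, slices_between: int = 2) -> np.ndarray:
    """Stack per-slice binary segmentation masks into a 3D volume,
    linearly interpolating intermediate slices between consecutive
    MRI slices to smooth the reconstruction."""
    volume = [masks[0].astype(float)]
    for a, b in zip(masks[:-1], masks[1:]):
        for k in range(1, slices_between + 1):
            t = k / (slices_between + 1)
            volume.append((1 - t) * a + t * b)  # blend adjacent masks
        volume.append(b.astype(float))
    return np.stack(volume)

# Two toy tumor cross-sections: a small disc growing into a larger one.
yy, xx = np.mgrid[:32, :32]
disc = lambda r: ((yy - 16) ** 2 + (xx - 16) ** 2 <= r ** 2)
masks = [disc(4), disc(8)]
volume = stack_contours(masks, slices_between=2)
```

Thresholding the interpolated slices at 0.5 yields intermediate cross-sections whose area grows smoothly between the two segmented contours; a surface-extraction step (e.g. marching cubes) would then produce the renderable 3D model.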
Machine learning (ML) algorithms are being adopted rapidly for a range of applications in the finance industry. In this paper, we use a structured dataset from Santander Bank, published on the data science and machine learning competition site kaggle.com, to predict whether a customer will make a transaction. The dataset consists of two classes and is imbalanced. To handle the imbalance and to achieve the prediction goal with the least log loss, we use a variety of methods and algorithms. The provided dataset is partitioned into two sets of 200,000 entries each for training and testing; 50% of the data is kept hidden on the competition server for evaluation of the submission. A detailed exploratory data analysis (EDA) of the datasets is performed to check the distributions of values, and the correlation between features and the importance of each feature are calculated. Feature importance is computed with random forests and decision trees, and principal component analysis and linear discriminant analysis are used for dimensionality reduction. We apply nine different algorithms to the dataset: logistic regression (LR), random forests (RF), decision trees (DT), multilayer perceptron (MLP), gradient boosting machine (GBM), category boosting (CatBoost), extreme gradient boosting (XGBoost), adaptive boosting (AdaBoost), and light gradient boosting (LightGBM). Treating the task as a regression problem with LightGBM outperforms the state-of-the-art algorithms with 85% accuracy. We then fine-tune the hyperparameters for our dataset and apply them in combination with LightGBM; this tuning improves performance to 89% accuracy.