In today's world, the advancement of telediagnostic equipment plays an essential role in monitoring heart disease. Early diagnosis of heart disease improves patients' treatment outcomes and enables expeditious diagnostic recommendations from clinical experts. However, feature extraction is a major challenge for heart disease prediction, as high-dimensional data increases the learning time of existing machine learning classifiers. In this article, a novel, efficient Internet of Things-based tuned adaptive neuro-fuzzy inference system (TANFIS) classifier is proposed for accurate prediction of heart disease. Here, the tuning parameters of the proposed TANFIS are optimized through a Laplace Gaussian mutation-based moth flame optimization and grasshopper optimization algorithm. The simulations are carried out using 11 different datasets from the UCI repository. The proposed method obtains an accuracy of 99.76% for heart disease prediction, an improvement of up to 5.4% over existing algorithms.
Keywords: classification, grasshopper optimization algorithm, heart disease prediction, Internet of Things, moth flame optimization
In the modern era, the emergence of social networking sites has paved the way for people to upload large numbers of images online. Social sites such as Instagram and Flickr allow users to add semantic information to images in the form of tags. Often these tags are the firsthand semantic data for retrieving images from the Internet. When a user searches for images on the web, the images whose tags are relevant to the user's query are retrieved. Much of the time, however, these semantic data are not relevant to the content of the image, and hence the user gets irrelevant images in contrast to their intended search. This is especially common in facial image search. In this paper, we propose an integrated approach to refine the relevance of retrieved facial images. Using the Scale-Invariant Feature Transform (SIFT), the facial features of the semantically most relevant image and the click-through data of all the retrieved images are used to rank and present a meaningful search result. Along with facial features and click-through information, the co-occurrence of related tags is also considered. We also propose the construction of an inverted index structure to improve search performance.
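The tag-based inverted index mentioned in the abstract above can be sketched as a simple tag-to-image mapping. The structure and the names used here (`build_index`, `search`, the sample tag sets) are illustrative assumptions, not the authors' actual implementation:

```python
from collections import defaultdict

def build_index(image_tags):
    """Map each tag to the set of image IDs carrying it (an inverted index)."""
    index = defaultdict(set)
    for image_id, tags in image_tags.items():
        for tag in tags:
            index[tag].add(image_id)
    return index

def search(index, query_tags):
    """Return image IDs whose tags cover every query tag (postings intersection)."""
    postings = [index.get(tag, set()) for tag in query_tags]
    return set.intersection(*postings) if postings else set()

# Hypothetical data: images annotated with user-supplied tags
images = {
    "img1": {"face", "beach", "selfie"},
    "img2": {"face", "portrait"},
    "img3": {"beach", "sunset"},
}
index = build_index(images)
```

Looking up a tag then touches only its postings set instead of scanning every image, which is the performance gain an inverted index provides for tag queries.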
Iris recognition is one of the most reliable personal identification methods. This paper presents a novel algorithm for iris recognition encompassing iris segmentation and the fusion of statistical and co-occurrence features extracted from the curvelet- and ridgelet-transformed images. In this work, the pupil and iris boundaries are detected using the equation of a circle through three points on its circumference. With Canny edge detection, the iris radius value is chosen empirically based on rigorous experimentation. Eyelash removal is done using a horizontal 1-D rank filter. Iris normalization is performed by mapping the detected iris region from the polar domain to the rectangular domain, and multi-resolution transforms such as the curvelet and ridgelet transforms are applied for multi-resolution feature extraction. Classification is done using the Manhattan distance (Md) and a multiclass classifier with a logistic function, and the two results are compared. Here, the benchmark database CASIA-IRIS-V3 (Interval) is used for identification and recognition. It is observed that the ridgelet transform increases the iris recognition rate.
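The boundary-detection step above relies on the fact that three points on a circle's circumference determine it uniquely. A minimal sketch of that computation (solving the perpendicular-bisector equations for the circumcenter) is given below; the function name and sample points are assumptions for illustration, not the paper's code:

```python
import math

def circle_from_three_points(p1, p2, p3):
    """Return (center, radius) of the unique circle through three
    non-collinear points, via the circumcenter formula."""
    ax, ay = p1
    bx, by = p2
    cx, cy = p3
    # Twice the signed area of the triangle; zero means collinear points
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    if d == 0:
        raise ValueError("points are collinear; no unique circle")
    ux = ((ax**2 + ay**2) * (by - cy)
          + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx)
          + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    radius = math.hypot(ax - ux, ay - uy)
    return (ux, uy), radius

# Three points on the unit circle recover center (0, 0) and radius 1
center, radius = circle_from_three_points((0, 1), (1, 0), (0, -1))
```

In the segmentation setting, the three points would come from detected edge pixels on the pupil or iris boundary.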
Iris segmentation locates the valid part of the iris for iris biometrics and is an essential module in iris recognition because it defines the effective image region used for subsequent processing such as feature extraction and iris identification. A novel algorithm for efficient and accurate iris segmentation is carried out in this system. The pupil boundary is detected by applying the equation of a circle through three points found on its circumference. Any reflection within the pupil region is filled by reducing the radius of the pupil step by step until it reaches zero. The edge points of the iris boundary (left, right, upper, and lower) are then calculated by taking fixed offsets from the pupil circumference. The novelty here is that eyelid localization is performed using '3-point marking' for the upper lid and an edge detector for the lower lid. After that, eyelash removal is done by order-statistic filtering. Finally, the accurate iris edge region is fitted by calculating the points of intersection between the eyelids and the eye localization. After edge fitting, the curvelet transform is applied for feature extraction. The Manhattan and Euclidean distance measures are used to measure the similarity between two images and find the best match. Here, the challenging benchmark database MMU is used for identification and verification.
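The final matching step in the abstracts above compares feature vectors by Manhattan (L1) or Euclidean (L2) distance and picks the nearest gallery entry. A minimal sketch, assuming plain Python lists as feature vectors and a hypothetical `best_match` helper:

```python
import math

def manhattan(a, b):
    """L1 distance between two feature vectors."""
    return sum(abs(x - y) for x, y in zip(a, b))

def euclidean(a, b):
    """L2 distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def best_match(query, gallery, distance=manhattan):
    """Return the label of the gallery vector nearest to the query."""
    return min(gallery, key=lambda label: distance(query, gallery[label]))

# Hypothetical enrolled templates (in practice, curvelet/ridgelet features)
gallery = {"subject_A": [0.0, 0.0, 0.0], "subject_B": [1.0, 1.0, 1.0]}
```

Manhattan distance avoids the square root and squaring, which is why it is often preferred when matching must run over many enrolled templates.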