Image hashing based on deep convolutional neural networks (CNNs), known as deep hashing, has achieved breakthroughs in image retrieval. Although deep features from different CNN layers carry different levels of information, most existing deep hashing methods extract the feature vector only from the output of the penultimate fully-connected layer, focusing primarily on semantic information while ignoring detailed structural information. This calls for research on multi-level hashing, which uses multi-level features to exploit different levels of CNN characteristics. To fill this gap, a novel image hashing method, Multi-Level Supervised Hashing with deep features (MLSH), is proposed in this paper to further exploit multiple levels of deep image features. It uses a multiple-hash-table mechanism to integrate multi-level features extracted from a single deep convolutional neural network, taking advantage of the complementarity among multi-level features from the network's various layers. High-level features reveal the semantic content of the image, while low-level features provide the structural information that high-level features lack. Instead of simple concatenation, several hash tables are trained individually using different levels of features from different layers, and these are then integrated for efficient image retrieval. The method has been systematically evaluated through experiments on three image databases, including CIFAR-10, MNIST and NUS-WIDE, and
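The multiple-hash-table idea above can be sketched in a few lines. This is a minimal illustration, not the authors' supervised method: it substitutes random-hyperplane (LSH) codes for MLSH's learned hash functions, and simulates the "high-level" and "low-level" deep features with random vectors. One table is built per feature level, and retrieval integrates the tables by summing per-table Hamming distances rather than concatenating the features.

```python
import numpy as np

rng = np.random.default_rng(0)

def hash_table(features, n_bits, rng):
    """Build one hash table for one feature level using random-hyperplane
    (LSH) codes -- a stand-in for a learned supervised hash function."""
    planes = rng.normal(size=(features.shape[1], n_bits))
    codes = (features @ planes > 0).astype(np.uint8)
    return planes, codes

def hamming(a, b):
    """Hamming distance between each row of a and the single code b."""
    return np.count_nonzero(a != b, axis=1)

# Simulated deep features for 100 database images: a 512-d "high-level"
# (semantic) vector and a 64-d "low-level" (structural) vector per image.
n = 100
high = rng.normal(size=(n, 512))
low = rng.normal(size=(n, 64))

# One table per feature level, instead of concatenating the levels.
tables = [hash_table(f, n_bits=32, rng=rng) for f in (high, low)]

def retrieve(q_high, q_low, k=5):
    """Integrate the per-level tables: sum Hamming distances across tables
    and return the indices of the k closest database images."""
    total = np.zeros(n)
    for (planes, codes), q in zip(tables, (q_high, q_low)):
        q_code = (q @ planes > 0).astype(np.uint8)
        total += hamming(codes, q_code[None, :])
    return np.argsort(total)[:k]

# A query identical to database image 7 has Hamming distance 0 in every
# table, so image 7 is ranked first.
print(retrieve(high[7], low[7]))
```

Summing distances across tables is one simple integration rule; it lets a strong match at either the semantic or the structural level pull a candidate up the ranking, which is the complementarity the abstract describes.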
This paper presents PhD research that aims to enhance the User Experience by proposing a framework combining three core components: dynamic interfaces, adaptive interfaces and intelligent interfaces. Initial research into the field has identified a gap at the intersection of these types of interaction. A dynamic interaction understands the user, their device and their physical environment to provide a basic User Experience. An adaptive interaction further understands the user's capabilities to implement an enhanced experience via usability and accessibility, whilst recognising the user's flow and pipeline. The intelligent interaction builds further upon this through the incorporation of Machine Learning algorithms that make the interface intelligent and provide a personalised experience for each user based upon their end goal. This in turn will reduce a user's cognitive load and enhance their interactive experience with an interface.
Aim: We propose a method for screening full blood count metadata for evidence of communicable and noncommunicable diseases using machine learning (ML). Materials & methods: High-dimensional hematology metadata were extracted over an 11-month period from Sysmex hematology analyzers for 43,761 patients. Predictive models for age, sex and individuality were developed to demonstrate the personalized nature of hematology data. Both numeric and raw flow cytometry data were used in supervised and unsupervised ML to predict the presence of pneumonia, urinary tract infection and COVID-19. Heart failure was used as a further objective to demonstrate the generalizability of the method. Results: Chronological age was predicted by a deep neural network with R2: 0.59 and mean absolute error: 12; sex with area under the receiver operating characteristic curve (AUROC): 0.83, phi: 0.47; individuality with 99.7% accuracy, phi: 0.97; pneumonia with AUROC: 0.74, sensitivity 58%, specificity 79%, 95% CI: 0.73–0.75, p < 0.0001; urinary tract infection with AUROC: 0.68, sensitivity 52%, specificity 79%, 95% CI: 0.67–0.68, p < 0.0001; COVID-19 with AUROC: 0.80, sensitivity 82%, specificity 75%, 95% CI: 0.79–0.80, p = 0.0006; and heart failure with AUROC: 0.78, sensitivity 72%, specificity 72%, 95% CI: 0.77–0.78, p < 0.0001. Conclusion: ML applied to hematology data could predict communicable and noncommunicable diseases, at both local and global levels.
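The abstract reports AUROC for every classification target. As a minimal illustration of the metric itself (not the authors' pipeline), AUROC can be computed directly from labels and predicted scores via the Mann-Whitney rank identity: it equals the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative case, with ties counting half.

```python
import numpy as np

def auroc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney U identity:
    the probability a random positive outscores a random negative."""
    labels = np.asarray(labels, dtype=bool)
    scores = np.asarray(scores, dtype=float)
    pos, neg = scores[labels], scores[~labels]
    # Count positive/negative pairs where the positive scores higher;
    # tied pairs contribute half a count each.
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# Toy example: 3 negatives and 3 positives with imperfect separation.
y = [0, 0, 0, 1, 1, 1]
s = [0.1, 0.4, 0.35, 0.8, 0.7, 0.2]
print(auroc(y, s))  # 7 of 9 pairs are ranked correctly: 7/9 ~= 0.778
```

An AUROC of 0.5 corresponds to chance-level ranking and 1.0 to perfect separation, which is the scale against which the reported values (0.68 to 0.83) should be read.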
We have developed a knowledge-based multimedia telecare system, based on a multimedia PC connected by ISDN at 128 kbit/s. The user display is a television. Multimedia material is accessed through a browser-based interface. A remote-control handset is used as the main means of interaction, to ensure ease of use and overcome any initial reservations resulting from 'technophobia' on the part of the informal carer. The system was used in 13 family homes and four professional sites in Northern Ireland. The evaluations produced positive comments from the informal carers. There are plans to expand the use of the system.