This paper presents a supervised texton-based approach for the accurate segmentation and measurement of the fetal head (BPD, OFD, HC) and femur (FL) in ultrasound images. The method consists of several steps. First, a non-linear diffusion technique is applied to reduce speckle noise. Then, based on the assumption that the cross-sectional intensity profiles of the skull and femur can be approximated by Gaussian-like curves, a multi-scale, multi-orientation filter bank is designed to extract texton features specific to fetal anatomical structures in ultrasound. The extracted texton cues, together with multi-scale local brightness, are combined into a unified framework for boundary detection of the fetal head and femur. Finally, for the fetal head, a direct least-squares ellipse fitting method is used to construct a closed head contour, whilst for the fetal femur, a closed contour is produced by connecting the detected femur boundaries. The presented method is demonstrated to be promising for clinical applications. Overall, the fetal head segmentation and measurement results of our method are comparable with the inter-observer difference of experts, with a best average precision of 96.85%, a maximum symmetric contour distance (MSD) of 1.46 mm, and an average symmetric contour distance (ASD) of 0.53 mm; for the fetal femur, the overall performance of our method is better than the inter-observer difference of experts, with an average precision of 84.37%, an MSD of 2.72 mm, and an ASD of 0.31 mm.
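To illustrate the final fitting and measurement step, the sketch below (a minimal illustration, not the authors' code) applies OpenCV's direct least-squares ellipse fit to detected skull boundary points and derives BPD, OFD and HC from the fitted axes; the boundary points `pts` and the pixel-to-millimetre scale are assumed inputs from earlier stages.

```python
# Minimal sketch: ellipse fitting on skull boundary points, then head
# measurements from the fitted axes. The axis-to-measurement mapping and
# the Ramanujan perimeter approximation are illustrative assumptions.
import cv2
import numpy as np

def head_measurements(pts, mm_per_px=1.0):
    """pts: (N, 2) float32 array of skull boundary points (pixels)."""
    (cx, cy), (d1, d2), angle = cv2.fitEllipse(pts.astype(np.float32))
    minor, major = sorted((d1, d2))        # full axis lengths in pixels
    a, b = major / 2.0, minor / 2.0        # semi-axes
    bpd = minor * mm_per_px                # biparietal diameter (short axis)
    ofd = major * mm_per_px                # occipitofrontal diameter (long axis)
    # Ramanujan's approximation of the ellipse perimeter -> head circumference
    h = ((a - b) ** 2) / ((a + b) ** 2)
    hc = np.pi * (a + b) * (1 + 3 * h / (10 + np.sqrt(4 - 3 * h))) * mm_per_px
    return bpd, ofd, hc
```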
Retail food packages carry various types of information, including the use-by date, the product name and so on. Correct coding of use-by dates on food packages is vitally important for avoiding the potential health risks to customers caused by mislabelled use-by dates. Checking use-by date coding manually is extremely tedious and laborious for a human operator and is prone to error, so an automatic system for validating the correctness of use-by date coding is needed. To construct such a system, use-by dates on food packages must first be recognized automatically and correctly. In this work, we propose a novel dual deep neural network methodology for the automatic recognition of use-by dates in camera-captured food package photographs, combining two networks: a fully convolutional network for use-by date ROI detection and a convolutional recurrent neural network for date character recognition. The proposed methodology is the first attempt to apply deep learning to automatic use-by date recognition. Comprehensive experimental evaluations show that the proposed method achieves high accuracy in use-by date recognition (more than 95% on our testing dataset) on food package images with varying lighting conditions, poor printing quality and varied textual/pictorial content collected from multiple real retailer sites.
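The sketch below shows the shape of the second-stage recognizer: a convolutional recurrent network that turns a cropped date ROI into per-timestep character logits suitable for CTC training. It is a minimal PyTorch illustration under assumed layer sizes; the abstract does not specify the authors' exact architecture.

```python
# Minimal sketch of a CRNN recognizer: conv features -> BiLSTM -> per-column
# logits for CTC decoding. Layer widths are illustrative assumptions.
import torch
import torch.nn as nn

class TinyCRNN(nn.Module):
    def __init__(self, n_classes):           # n_classes includes the CTC blank
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.rnn = nn.LSTM(128 * 8, 256, bidirectional=True, batch_first=True)
        self.fc = nn.Linear(512, n_classes)

    def forward(self, x):                     # x: (B, 1, 32, W) grayscale ROI
        f = self.cnn(x)                       # (B, 128, 8, W/4)
        b, c, h, w = f.shape
        f = f.permute(0, 3, 1, 2).reshape(b, w, c * h)  # one timestep per column
        out, _ = self.rnn(f)
        return self.fc(out)                   # (B, W/4, n_classes) for CTC loss
```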
Bird populations are important biodiversity indicators, so collecting reliable population data matters to ecologists and scientists. However, existing manual monitoring methods are labour-intensive, time-consuming, and potentially error prone. The aim of our work is to develop a reliable automated system capable of classifying the species of individual birds in flight from video data. This is challenging, but appropriate for use in the field, since there is often a requirement to identify birds in flight rather than while stationary. We present our work, which uses a new and rich set of appearance features for classification from video. We also introduce motion features, including curvature and wing-beat frequency. Combined with a Normal Bayes classifier and a Support Vector Machine (SVM) classifier, we present experimental evaluations of our appearance and motion features across a dataset comprising seven species. Using our appearance feature set alone, we achieved classification rates of 92% and 89% (using the Normal Bayes and SVM classifiers, respectively), which significantly outperforms a recent comparable state-of-the-art system. Using motion features alone, we achieved a lower classification rate, but this motivates our ongoing work, which seeks to combine appearance and motion features to achieve even more robust classification.
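One plausible way to extract the wing-beat frequency motion feature is spectral analysis of a per-frame tracking signal; the sketch below (an assumption on our part, not the authors' implementation) locates the dominant FFT peak of a signal such as the tracked bird's bounding-box height.

```python
# Minimal sketch: estimate wing-beat frequency as the dominant non-DC peak
# in the FFT spectrum of a per-frame signal (e.g. bounding-box height).
import numpy as np

def wingbeat_frequency(signal, fps):
    """signal: 1-D array, one sample per video frame; fps: frame rate (Hz)."""
    s = np.asarray(signal, dtype=float) - np.mean(signal)  # remove DC offset
    spectrum = np.abs(np.fft.rfft(s))
    freqs = np.fft.rfftfreq(len(s), d=1.0 / fps)
    return freqs[1:][np.argmax(spectrum[1:])]   # skip the zero-frequency bin
```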
The monitoring of bird populations can provide important information on the state of sensitive ecosystems; however, the manual collection of reliable population data is labour-intensive, time-consuming, and potentially error prone. Automated monitoring using computer vision is therefore an attractive proposition, which could facilitate the collection of detailed data on a much larger scale than is currently possible. A number of existing algorithms can classify bird species from individual high-quality, detailed images, often using manual inputs (such as a priori parts labelling). However, deployment in the field necessitates fully automated in-flight classification, which remains an open challenge due to poor image quality, high and rapid variation in pose, and the similar appearance of some species. We address this as a fine-grained classification problem and have collected a video dataset of thirteen bird classes (ten species, plus one species represented by three colour-variant classes) for training and evaluation. We present our proposed algorithm, which selects effective features from a large pool of appearance and motion features. We compare our method to others that use appearance features only, including image classification using state-of-the-art Deep Convolutional Neural Networks (CNNs). Using our algorithm we achieved a 90% correct classification rate, and we also show that effectively selected motion and appearance features used together can outperform state-of-the-art single-image classifiers. We also show that the most significant motion features improve correct classification rates by 7% compared to using appearance features alone.
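The select-then-classify structure described above can be sketched as a simple pipeline; the example below uses scikit-learn's mutual-information feature scoring as a stand-in selector (the abstract does not name the authors' selection scheme), applied to a pooled appearance + motion feature matrix.

```python
# Minimal sketch: pick the k most informative features from a pooled
# appearance + motion feature matrix, then classify with an SVM. The
# selector and k are illustrative assumptions, not the paper's method.
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

def build_classifier(k_features=50):
    """Feature selection followed by an SVM, as one sklearn pipeline."""
    return make_pipeline(
        SelectKBest(mutual_info_classif, k=k_features),
        SVC(kernel="rbf"),
    )

# Hypothetical usage: rows are tracked flight sequences, columns concatenate
# appearance and motion descriptors; y holds species labels.
# clf = build_classifier().fit(X_train, y_train)
# accuracy = clf.score(X_test, y_test)
```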