Automated blood cell counting instruments are important tools used daily by haematologists and medical analysts to perform a Complete Blood Count (CBC). The results of a CBC may be complex to interpret, yet they can drive important decisions about a patient's medical treatment. The main focus of this research is a specific CBC technique, the White Blood Cell Count (WBCC). The WBCC is generally performed by skilled medical operators on peripheral blood smears, both to obtain a correct count and to extract useful information such as cell abnormalities or the patient's physical status. The manual WBCC presents several challenges: it is a time-consuming, labour-intensive, and expensive process. This paper introduces a reliable automated WBCC system based on image processing techniques, with the main aims of speeding up the WBCC process and improving its accuracy. The proposed system introduces a new approach to segmenting white blood cells that exploits knowledge acquired from a training set built from the three main classes of elements present in a blood smear image: white blood cells, red blood cells, and plasma. The segmented regions containing only white blood cells then undergo a counting step based on the circular Hough transform, which exploits the grey-level information. The method has been tested on three public datasets in order to assess the accuracy of the segmentation approach under different colour and illumination conditions. The experimental results obtained on these datasets demonstrate that the proposed method permits the automatic analysis of blood cells via image processing techniques, and it represents a medical tool that avoids the numerous drawbacks associated with manual observation. The process can also be used for counting, as it provides excellent performance and enables early diagnostic suspicion, which can then be confirmed by a haematologist through specialised techniques.
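The abstract does not include the paper's implementation, but the circular Hough transform counting step it describes can be illustrated with a minimal pure-Python sketch. The function name, the synthetic edge map, and the fixed-radius voting scheme below are all illustrative assumptions, not the authors' code: each edge pixel votes for candidate circle centres at a given radius, and accumulator peaks mark detected cells.

```python
import math

def hough_circle_centre(edge_points, radius, grid_size):
    # Illustrative accumulator over candidate centres (a, b) for one fixed radius.
    # Each edge point (x, y) votes for every centre it could lie on:
    #   a = x - r*cos(theta),  b = y - r*sin(theta)
    acc = {}
    for (x, y) in edge_points:
        for theta_deg in range(0, 360, 5):
            t = math.radians(theta_deg)
            a = round(x - radius * math.cos(t))
            b = round(y - radius * math.sin(t))
            if 0 <= a < grid_size and 0 <= b < grid_size:
                acc[(a, b)] = acc.get((a, b), 0) + 1
    # The accumulator peak is the most-voted centre (one "cell" in this toy case).
    return max(acc, key=acc.get)

# Synthetic "cell" boundary: edge points of a circle centred at (50, 40), radius 12.
pts = [(50 + round(12 * math.cos(math.radians(d))),
        40 + round(12 * math.sin(math.radians(d)))) for d in range(0, 360, 10)]
print(hough_circle_centre(pts, 12, 100))
```

A real system would instead scan a range of radii over the grey-level edge map of each segmented region and count every accumulator peak above a threshold, rather than returning a single centre.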
In this work we describe a fast and stable algorithm for computing the orthogonal moments of an image. Orthogonal moments are characterized by a high discriminative power, but some of their formulations have a large computational complexity that limits their real-time application. This paper describes in detail an approach based on recurrence relations and proposes an optimized Matlab implementation of the corresponding computational procedure, aiming to overcome these limitations and to make an efficient, easy-to-use software tool available to the community. In our experiments we evaluate the effectiveness of the recurrence formulation, as well as its performance on the reconstruction task, in comparison with the closed-form representation often used in the literature. The results show a considerable reduction in computational complexity, together with greater reconstruction accuracy. To assess and compare the accuracy of the computed moments in texture analysis, we perform classification experiments on six well-known databases of texture images. Again, the recurrence formulation outperforms the closed-form representation in classification. More importantly, when computed from the GLCM of the image using the proposed stable procedure, the orthogonal moments outperform, in some situations, some of the most widely used state-of-the-art descriptors for texture classification.
Given the great success of Convolutional Neural Networks (CNNs) in image representation and classification tasks, we argue that Content-Based Image Retrieval (CBIR) systems could also leverage CNN capabilities, especially when Relevance Feedback (RF) mechanisms are employed. On the one hand, CNNs can improve the performance of CBIR systems, which is strictly related to the effectiveness of the descriptors used to represent an image, since such systems aim to provide the user with images similar to an initial query image. On the other hand, they can reduce the semantic gap between the similarity perceived by the user and the similarity computed by the machine, by exploiting an RF mechanism in which the user labels the returned images as relevant or not with respect to her interests. Consequently, in this work we propose a CBIR system based on transfer learning from a CNN trained on a vast image database, thus exploiting the generic image representation it has already learned. The pre-trained CNN is then fine-tuned using the RF supplied by the user in order to reduce the semantic gap: after the user's feedback, we tune and re-train the CNN according to the labelled set of relevant and non-relevant images. We then suggest different strategies for exploiting the updated CNN to return a novel set of images expected to be relevant to the user's needs. Experimental results on different datasets show the effectiveness of the proposed mechanisms in improving the representation power of the CNN with respect to the user's concept of image similarity. Moreover, the pros and cons of the different approaches can be clearly pointed out, thus providing clear guidelines for implementation in production environments.
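The paper fine-tunes a CNN on the user's feedback, which cannot be reproduced in a few self-contained lines; as a stand-in, the classic Rocchio update sketched below illustrates the same RF loop structure on plain feature vectors (rank by similarity, collect relevant/non-relevant labels, move the query representation accordingly). The function names, weights, and vectors are all illustrative assumptions, not the authors' method.

```python
import math

def cosine(u, v):
    # Cosine similarity between two feature vectors.
    num = sum(a * b for a, b in zip(u, v))
    den = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return num / den if den else 0.0

def rocchio_update(query, relevant, non_relevant, alpha=1.0, beta=0.75, gamma=0.15):
    # Move the query toward the centroid of user-labelled relevant vectors
    # and away from the centroid of non-relevant ones (Rocchio, not the
    # CNN re-training the paper proposes).
    dim = len(query)
    def centroid(vecs):
        return [sum(v[i] for v in vecs) / len(vecs) if vecs else 0.0
                for i in range(dim)]
    mr, mn = centroid(relevant), centroid(non_relevant)
    return [alpha * q + beta * r - gamma * n for q, r, n in zip(query, mr, mn)]

# One feedback round: the user marks [0, 1] relevant and [1, -1] non-relevant.
q = [1.0, 0.0]
q_new = rocchio_update(q, relevant=[[0.0, 1.0]], non_relevant=[[1.0, -1.0]])
```

In the proposed system the analogous step is re-training the fine-tuned CNN on the labelled sets, so the whole representation (not just the query point) adapts to the user's notion of similarity.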