Classification of digital cervical images acquired during visual inspection with acetic acid (VIA) is an important step in automated image-based cervical cancer detection. Many algorithms have been developed that classify cervical images by extracting hand-crafted mathematical features and feeding them to a classifier. Deciding which features are suitable and choosing a learning algorithm is a complex task. Convolutional neural networks (CNNs), by contrast, learn the most suitable hierarchical features directly from the raw input image. In this paper, we demonstrate the feasibility of using a shallow CNN to classify patches of cervical images as cancerous or non-cancerous. We used cervix images acquired with an Android device after the application of 3%-5% acetic acid in 102 women. Of these, 42 cervix images belonged to the VIA-positive category (pathologic) and 60 to the VIA-negative category (healthy controls). A total of 275 image patches of 15 × 15 pixels were manually extracted from VIA-positive areas and labeled as positive examples; similarly, 409 image patches were extracted from VIA-negative areas and labeled as VIA-negative. These patches were classified using a shallow CNN composed of one convolutional layer, one rectified linear unit (ReLU) layer, one pooling layer, and two fully connected layers. A classification accuracy of 100% was achieved with this shallow CNN.
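The abstract names the layer sequence (convolution, ReLU, pooling, two fully connected layers) but not the filter counts, kernel sizes, or hidden widths. The NumPy sketch below illustrates a forward pass of such a shallow CNN on a single 15 × 15 RGB patch; the specific dimensions (8 filters, 4 × 4 kernels, 32 hidden units) and the random, untrained weights are illustrative assumptions, not the authors' trained model.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(x, kernels):
    """Valid 2-D convolution: x is (H, W, C_in), kernels is (K, K, C_in, C_out)."""
    k, _, c_in, c_out = kernels.shape
    h, w = x.shape[0] - k + 1, x.shape[1] - k + 1
    out = np.zeros((h, w, c_out))
    for i in range(h):
        for j in range(w):
            # Contract the (K, K, C_in) window against all output filters at once.
            out[i, j, :] = np.tensordot(x[i:i + k, j:j + k, :], kernels, axes=3)
    return out

def relu(x):
    return np.maximum(x, 0.0)

def max_pool(x, size=2):
    """Non-overlapping max pooling over the two spatial dimensions."""
    h2, w2 = x.shape[0] // size, x.shape[1] // size
    x = x[:h2 * size, :w2 * size, :]
    return x.reshape(h2, size, w2, size, -1).max(axis=(1, 3))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def shallow_cnn(patch, params):
    """conv -> ReLU -> pool -> FC -> FC, returning class probabilities
    (VIA-negative, VIA-positive). Weights here are random placeholders;
    in practice they would be learned from the labeled patches."""
    x = relu(conv2d(patch, params["conv"]))   # (12, 12, 8) for a 15x15 input
    x = max_pool(x)                           # (6, 6, 8)
    x = x.ravel()                             # flatten for the dense layers
    x = relu(params["fc1"] @ x + params["b1"])
    return softmax(params["fc2"] @ x + params["b2"])

# Illustrative layer sizes (assumed, not from the paper).
params = {
    "conv": rng.normal(scale=0.1, size=(4, 4, 3, 8)),
    "fc1": rng.normal(scale=0.1, size=(32, 6 * 6 * 8)),
    "b1": np.zeros(32),
    "fc2": rng.normal(scale=0.1, size=(2, 32)),
    "b2": np.zeros(2),
}

patch = rng.random((15, 15, 3))   # one 15x15 RGB image patch
probs = shallow_cnn(patch, params)
print(probs)
```

With weights trained on the 275 positive and 409 negative patches, the larger of the two output probabilities would give the patch-level VIA decision.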
Visual inspection with acetic acid (VIA) is an effective, affordable, and simple test for cervical cancer screening in resource-poor settings. However, considerable expertise is needed to differentiate cancerous lesions from normal findings, and such expertise is often lacking in developing countries. Many studies have attempted to automate cervical cancer detection from cervix images acquired during the VIA procedure, using images obtained through colposcopy or cervicography. Colposcopy, however, is expensive and therefore not feasible as a screening tool in resource-poor settings. Cervicography uses a digital camera to acquire cervix images that are subsequently sent to experts for evaluation; hence it cannot provide a real-time decision on whether the cervix is normal during the VIA examination. If the cervix is found to be abnormal, the patient may be referred to a hospital for further evaluation with a Pap smear and/or biopsy. An Android device with a built-in app that acquires images and provides instant results would therefore be an obvious choice in resource-poor settings. In this paper, we propose an algorithm for the analysis of cervix images acquired with an Android device, which can serve as the basis of a decision support system giving an instant decision during cervical cancer screening. The algorithm achieves an accuracy of 97.94%, a sensitivity of 99.05%, and a specificity of 97.16%.