Background
The occurrence of bile duct injury (BDI) during laparoscopic cholecystectomy (LC) is an important medical issue. Expert surgeons prevent intraoperative BDI by identifying four landmarks. The present study aimed to develop a system that outlines these landmarks on endoscopic images in real time.
Methods
An intraoperative landmark indication system was constructed using YOLOv3, a deep learning-based object detection algorithm. The training datasets comprised approximately 2000 endoscopic images of the region of Calot's triangle in the gallbladder neck, obtained from 76 LC videos. The trained YOLOv3 model was applied to 23 LC videos that were not used in training to evaluate how accurately the system identified four landmarks: the cystic duct, the common bile duct, the lower edge of the left medial liver segment, and Rouviere's sulcus. Additionally, we constructed a prototype and used it in a verification experiment during an operation on a patient with cholelithiasis.
Results
The YOLOv3 model was evaluated both quantitatively and subjectively. The average precision values for each landmark were as follows: common bile duct, 0.320; cystic duct, 0.074; lower edge of the left medial liver segment, 0.314; and Rouviere's sulcus, 0.101. The two expert surgeons involved in the annotation confirmed consensus regarding valid indications for each landmark in 22 of the 23 LC videos. In the verification experiment, the intraoperative landmark indication system made the surgical team more aware of the landmarks.
Conclusions
The intraoperative landmark indication system successfully identified four landmarks during LC, which may help reduce the incidence of BDI and thus increase the safety of LC. The novel system proposed in the present study may prevent BDI during LC in clinical practice.
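The per-landmark average precision values reported above follow the standard object-detection evaluation: detections are ranked by confidence, matched to ground-truth boxes by intersection over union (IoU), and precision is integrated over recall. The sketch below illustrates that computation; the function names, data layout, and the 0.5 IoU threshold are assumptions for illustration, not the authors' actual evaluation code.

```python
# Hypothetical sketch of per-landmark average precision (AP) evaluation.
# detections: list of (image_id, confidence, box); boxes are (x1, y1, x2, y2).
# ground_truths: dict mapping image_id -> list of ground-truth boxes.

def iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def average_precision(detections, ground_truths, iou_thresh=0.5):
    """AP for one landmark class, integrating precision over recall."""
    detections = sorted(detections, key=lambda d: -d[1])  # high confidence first
    matched = {img: [False] * len(boxes) for img, boxes in ground_truths.items()}
    n_gt = sum(len(b) for b in ground_truths.values())
    tp = fp = 0
    precisions, recalls = [], []
    for img, _score, box in detections:
        # greedily match to the best unmatched ground-truth box in this frame
        best, best_i = 0.0, -1
        for i, g in enumerate(ground_truths.get(img, [])):
            o = iou(box, g)
            if o > best and not matched[img][i]:
                best, best_i = o, i
        if best >= iou_thresh:
            matched[img][best_i] = True
            tp += 1
        else:
            fp += 1
        precisions.append(tp / (tp + fp))
        recalls.append(tp / n_gt)
    # step-wise integration: sum precision weighted by recall increments
    ap, prev_r = 0.0, 0.0
    for p, r in zip(precisions, recalls):
        ap += p * (r - prev_r)
        prev_r = r
    return ap
```

Under this scheme an AP of 0.320 for the common bile duct means that, averaged over recall levels, roughly a third of ranked detections for that class were correct matches.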
Keywords Artificial intelligence • Bile duct injury • Deep learning • Landmark • Laparoscopic cholecystectomy

Laparoscopic cholecystectomy (LC) is widely accepted worldwide [1]. LC is frequently performed by doctors who specialize in endoscopic surgery and is considered an introductory-level endoscopic procedure [2]. Currently, LC is the standard procedure for cholelithiasis and/or cholecystitis. Previous studies have described the standard procedure of
We have been developing an image-searching method to identify misfiled images in a PACS server. It is desirable to develop new biological fingerprints (BFs) that reduce the influence of differences in positioning and breathing phases and thereby improve recognition performance. In our previous studies, the whole lung field (WLF), which included the shadows of the body and lungs, was affected by differences in positioning and/or breathing phase. In this study, we showed the usefulness of a circumscribed lung within a rectangular region of interest and the upper half of a chest radiograph as modified BFs. We used 200 images as hypothetically misfiled images. Cross-correlation was used to quantify the resemblance between the BFs in the misfiled images and the corresponding BFs in the database images. In a receiver operating characteristic analysis, the modified BFs yielded better results than the WLF; therefore, they could be used as identifiers for patient recognition and identification.
Background
Surgical process modeling automatically identifies surgical phases, and further improvement in recognition accuracy is expected with deep learning. Surgical tool or time-series information has been used to improve the recognition accuracy of a model. However, it is difficult to collect this information continuously during surgery. The present study aimed to develop a deep convolutional neural network (CNN) model that correctly identifies the surgical phase during laparoscopic cholecystectomy (LC).
Methods
We divided LC into six surgical phases (P1–P6) and one redundant phase (P0). We prepared 115 LC videos and converted them to image frames at 3 fps. Three experienced doctors labeled the surgical phases in all image frames. Our deep CNN model was trained with 106 of the 115 annotated datasets and was evaluated with the remaining datasets. By using both the prediction probability and the frequency of each predicted phase over a certain period, we aimed for highly accurate surgical phase recognition in the operating room.
Results
Nine full LC videos were converted into image frames and were fed to our deep CNN model. The average accuracy, precision, and recall were 0.970, 0.855, and 0.863, respectively.
Conclusion
The deep CNN model in this study successfully identified both the six surgical phases and the redundant phase, P0, which may increase the versatility of the surgical process recognition model for clinical use. We believe that this model can be used in artificial intelligence for medical devices. The recognition accuracy is expected to improve further with developments in advanced deep learning algorithms.