IMPORTANCE Deep learning-based automatic surgical instrument recognition is an indispensable technology for surgical research and development. However, pixel-level recognition with high accuracy is required to make it suitable for surgical automation. OBJECTIVE To develop a deep learning model that can simultaneously recognize 8 types of surgical instruments frequently used in laparoscopic colorectal operations and evaluate its recognition performance. DESIGN, SETTING, AND PARTICIPANTS This quality improvement study was conducted at a single institution with a multi-institutional data set. Laparoscopic colorectal surgical videos recorded between April 1, 2009, and December 31, 2021, were included in the video data set. Deep learning-based instance segmentation, an image recognition approach that recognizes each object individually and pixel by pixel instead of roughly enclosing it with a bounding box, was performed for 8 types of surgical instruments. MAIN OUTCOMES AND MEASURES Average precision, calculated from the area under the precision-recall curve, was used as an evaluation metric. The average precision reflects the counts of true-positive, false-positive, and false-negative instances, and the mean average precision value for 8 types of surgical instruments was calculated. Five-fold cross-validation was used as the validation method. The annotation data set was split into 5 segments, of which 4 were used for training and the remainder for validation. The data set was split at the per-case level instead of the per-frame level; thus, the images extracted from an intraoperative video in the training set never appeared in the validation set. Validation was performed for all 5 validation sets, and the average mean average precision was calculated. RESULTS In total, 337 laparoscopic colorectal surgical videos were used. Pixel-by-pixel annotation was manually performed for 81 760 labels on 38 628 static images, constituting the annotation data set.
The mean average precisions of the instance segmentation for surgical instruments were 90.9% for 3 instruments, 90.3% for 4 instruments, 91.6% for 6 instruments, and 91.8% for 8 instruments. CONCLUSIONS AND RELEVANCE A deep learning-based instance segmentation model that simultaneously recognizes 8 types of surgical instruments with high accuracy was successfully developed. The accuracy was maintained even when the number of types of surgical instruments increased. This model can be applied to surgical innovations, such as intraoperative navigation and surgical automation.
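The evaluation metric described above, average precision as the area under the precision-recall curve, can be sketched in a few lines. This is an illustrative implementation, not the study's own code: it assumes detections have already been sorted by descending confidence and matched to ground truth (1 for a true positive, 0 for a false positive), and it uses a simple rectangular-sum approximation of the area.

```python
# Minimal sketch of average precision (AP) as the area under the
# precision-recall curve. Assumes `is_tp` lists detection outcomes
# sorted by descending confidence (1 = true positive, 0 = false
# positive) against `n_ground_truth` annotated instances.

def average_precision(is_tp, n_ground_truth):
    tp = fp = 0
    ap = 0.0
    prev_recall = 0.0
    for hit in is_tp:
        if hit:
            tp += 1
        else:
            fp += 1
        precision = tp / (tp + fp)
        recall = tp / n_ground_truth
        # Accumulate area under the curve between successive recall points
        ap += precision * (recall - prev_recall)
        prev_recall = recall
    return ap
```

The mean average precision reported in the abstract would then be the mean of this value across the 8 instrument classes, averaged again over the 5 cross-validation folds.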
Lay Summary To prevent intraoperative organ injury, surgeons strive to identify anatomical structures as early and accurately as possible during surgery. The objective of this prospective observational study was to develop artificial intelligence (AI)-based real-time automatic organ recognition models in laparoscopic surgery and to compare their performance with that of surgeons. The time taken to recognize target anatomy was compared between the AI models and both expert and novice surgeons. The AI models demonstrated faster recognition of target anatomy than surgeons, especially novice surgeons. These findings suggest that AI has the potential to compensate for the skill and experience gap between surgeons.
Background: The preservation of autonomic nerves is the most important factor in maintaining genitourinary function in colorectal surgery; however, these nerves are not clearly recognisable, and their identification is strongly affected by the surgeon's ability. Therefore, this study aimed to develop a deep learning model for the semantic segmentation of autonomic nerves during laparoscopic colorectal surgery and to experimentally verify the model through intraoperative use and pathological examination. Materials and methods: The annotation data set comprised videos of laparoscopic colorectal surgery. The images of the hypogastric nerve (HGN) and superior hypogastric plexus (SHP) were manually annotated under a surgeon's supervision. The Dice coefficient was used to quantify the model performance after five-fold cross-validation. The model was used in actual surgeries to compare the recognition timing of the model with that of surgeons, and pathological examination was performed to confirm whether the samples labelled by the model from the colorectal branches of the HGN and SHP were nerves. Results: The data set comprised 12 978 video frames of the HGN from 245 videos and 5198 frames of the SHP from 44 videos. The mean (±SD) Dice coefficients of the HGN and SHP were 0.56 (±0.03) and 0.49 (±0.07), respectively. The proposed model was used in 12 surgeries, and it recognised the right HGN earlier than the surgeons did in 50.0% of the cases, the left HGN earlier in 41.7% of the cases and the SHP earlier in 50.0% of the cases. Pathological examination confirmed that all 11 samples were nerve tissue. Conclusion: An approach for the deep-learning-based semantic segmentation of autonomic nerves was developed and experimentally validated. This model may facilitate intraoperative recognition during laparoscopic colorectal surgery.
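The Dice coefficient used above to score the segmentation is the standard overlap measure 2|A∩B| / (|A|+|B|) between the predicted and ground-truth masks. A minimal sketch, not the authors' implementation, operating on flat binary pixel lists:

```python
# Minimal sketch of the Dice coefficient between two binary
# segmentation masks, given as flat sequences of 0/1 pixel labels.

def dice(pred, target):
    intersection = sum(p * t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    if total == 0:
        return 1.0  # both masks empty: treated as perfect agreement
    return 2.0 * intersection / total
```

For example, masks [1, 1, 0, 0] and [1, 0, 1, 0] overlap in one pixel out of two predicted and two true pixels, giving a Dice coefficient of 0.5; a score of 0.56, as reported for the HGN, therefore indicates slightly better than half overlap between prediction and annotation.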