Background: Complex oncological procedures pose various surgical challenges, including dissection in distinct tissue planes and preservation of vulnerable anatomical structures throughout different surgical phases. In rectal surgery, violation of dissection planes increases the risk of local recurrence and of autonomic nerve damage resulting in incontinence and sexual dysfunction. While deep learning-based identification of target structures has been described for basic laparoscopic procedures, the feasibility of artificial intelligence-based guidance has not yet been investigated in complex abdominal surgery.
Methods: A dataset of 57 robot-assisted rectal resection (RARR) videos was split into a pre-training dataset of 24 temporally non-annotated videos and a training dataset of 33 temporally annotated videos. Based on phase annotations and pixel-wise annotations of randomly selected image frames, convolutional neural networks were trained to distinguish surgical phases and to segment anatomical structures and tissue planes in a phase-specific manner. Model performance was evaluated using the F1 score, Intersection-over-Union (IoU), precision, recall, and specificity.
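The segmentation metrics named above can all be derived from the pixel-wise confusion counts between a predicted and a ground-truth binary mask. The following is a minimal illustrative sketch (not the authors' evaluation code; function and variable names are our own), assuming masks are given as NumPy arrays:

```python
import numpy as np

def segmentation_metrics(pred, target):
    """Compute IoU, precision, recall, specificity, and F1 (Dice)
    from a pair of binary segmentation masks of equal shape."""
    pred = pred.astype(bool)
    target = target.astype(bool)

    # Pixel-wise confusion counts
    tp = np.logical_and(pred, target).sum()
    fp = np.logical_and(pred, ~target).sum()
    fn = np.logical_and(~pred, target).sum()
    tn = np.logical_and(~pred, ~target).sum()

    # Guarded divisions to avoid 0/0 on empty masks
    iou = tp / (tp + fp + fn) if (tp + fp + fn) else 0.0
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    specificity = tn / (tn + fp) if (tn + fp) else 0.0
    f1 = 2 * tp / (2 * tp + fp + fn) if (2 * tp + fp + fn) else 0.0

    return {"iou": iou, "precision": precision, "recall": recall,
            "specificity": specificity, "f1": f1}
```

In practice, such per-image metrics would be averaged over all annotated frames of a phase to obtain the mean values reported in the Results.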
Results: We demonstrate that both temporal (average F1 score for surgical phase recognition: 0.78) and spatial features of complex surgeries can be identified using machine learning-based image analysis. Based on a total of 8797 images with pixel-wise target structure segmentations, mean IoUs ranged from 0.09 to 0.82 for anatomical target structures and from 0.05 to 0.32 for dissection planes and dissection lines across the different phases of RARR.
Conclusions: Image-based recognition is a promising technique for surgical guidance in complex surgical procedures. Future research should investigate the clinical applicability, usability, and therapeutic impact of such a guidance system.