2022
DOI: 10.1109/tim.2022.3162596
Autonomous Recognition of Multiple Surgical Instruments Tips Based on Arrow OBB-YOLO Network

Cited by 20 publications (6 citation statements)
References 38 publications
“…YOLOv8-OBB utilizes the corner points (obtained from handling the ground truth information for object detection or directly provided in the dataset) for internal training purposes. 23) The model treats these points as the ground truth representation of the object during loss calculations and network optimization. While the model trains using corner points internally, it leverages the xywhr format for predicted outputs.…”
Section: Methods
confidence: 99%
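For readers unfamiliar with the two encodings this quoted passage contrasts, the sketch below converts an oriented box between a 4-corner ground-truth representation and the xywhr form (center x, center y, width, height, rotation) used for predicted outputs. This is a minimal illustration, not Ultralytics' internal code; the consecutive-corner ordering and the radian angle convention are assumptions.

```python
# Minimal sketch of converting an oriented bounding box between the 4-corner
# form (x1, y1, ..., x4, y4) and the xywhr form (cx, cy, w, h, rotation).
# Assumes the corners are given in order (consecutive corners share an edge).
import numpy as np

def corners_to_xywhr(corners: np.ndarray) -> np.ndarray:
    """corners: (4, 2) array of a rotated rectangle's vertices, in order."""
    cx, cy = corners.mean(axis=0)          # center is the vertex centroid
    e1 = corners[1] - corners[0]           # first edge vector
    e2 = corners[2] - corners[1]           # adjacent edge vector
    w, h = np.linalg.norm(e1), np.linalg.norm(e2)
    r = np.arctan2(e1[1], e1[0])           # rotation of the first edge, radians
    return np.array([cx, cy, w, h, r])

def xywhr_to_corners(box: np.ndarray) -> np.ndarray:
    """box: (cx, cy, w, h, r) -> (4, 2) corner array."""
    cx, cy, w, h, r = box
    # Half-extent offsets in the box's local frame, rotated into image coordinates.
    local = np.array([[-w / 2, -h / 2], [w / 2, -h / 2],
                      [w / 2,  h / 2], [-w / 2,  h / 2]])
    rot = np.array([[np.cos(r), -np.sin(r)], [np.sin(r), np.cos(r)]])
    return local @ rot.T + np.array([cx, cy])

# Example: a 40x20 box centered at (50, 30), rotated by 30 degrees.
box = np.array([50.0, 30.0, 40.0, 20.0, np.deg2rad(30)])
recovered = corners_to_xywhr(xywhr_to_corners(box))
print(np.round(recovered, 3))  # ~[50. 30. 40. 20. 0.524]
```

The round trip at the end recovers the original xywhr parameters, which serves as a quick sanity check on the assumed corner ordering.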
“…Most traditional methods of instrument detection typically have relied on low-level visual features, such as color and shape, for a simple computer vision task of color segmentation or thresholding [35], [36]. As deep learning approaches using CNNs have been increasing in popularity, many methods are proposed to achieve surgical instrument detection with the extraction of high-level features [7], [19], [20], [37], [38]. Andru et al [7] proposed EndoNet which first adopts CNN trained with labeled surgical images to achieve instrument detection and recognition from the video of endoscopic surgery.…”
Section: A. Surgical Instrument Detection
confidence: 99%
“…Andru et al [7] proposed EndoNet which first adopts CNN trained with labeled surgical images to achieve instrument detection and recognition from the video of endoscopic surgery. Following this work, more neural network architectures are introduced in instrument detection tasks from surgical scenarios, including an attention-guided CNN for real-time instrument detection in minimally invasive surgery [9], a multi-level feature aggregation network for multi-instrument identification [20], an arrow object bounding box network based on YOLO for identification and localization of instrument tips [38], an anchor-free CNN for instrument detection in robot-assisted surgery [23]. Furthermore, weakly supervised methods for surgical instrument detection are proposed to utilize the images without annotations of bounding boxes, which extends the approach of network training [39].…”
Section: A. Surgical Instrument Detection
confidence: 99%
“…Object detection is an essential task in computer vision, with numerous applications in various domains, including medical imaging [1], [2], surgical procedures [3], and personal protective equipment detection [4]. It plays a crucial role in medical imaging by enabling the identification and localization of abnormalities or objects of interest within medical images [5], [6].…”
Section: Introduction
confidence: 99%