“…However, it is not difficult even for non-orthopedic surgeons to crop an image around the hip joint. Constructing an object detection model would address the second and third limitations, but object detection is more difficult than image classification, as it must accurately localize the object of interest (Feng et al. 2019).…”
Background and purpose — Deep-learning approaches based on convolutional neural networks (CNNs) are gaining interest in the medical imaging field. We evaluated the diagnostic performance of a CNN to discriminate femoral neck fractures, trochanteric fractures, and non-fracture using antero-posterior (AP) and lateral hip radiographs.
Patients and methods — 1,703 plain hip AP radiographs and 1,220 plain hip lateral radiographs were included in the total dataset. 150 images each of the AP and lateral views were separated out and the remainder of the dataset was used for training. The CNN made the diagnosis based on: (1) AP radiographs alone, (2) lateral radiographs alone, or (3) both AP and lateral radiographs combined. The diagnostic performance of the CNN was measured by the accuracy, recall, precision, and F1 score. We further compared the CNN’s performance with that of orthopedic surgeons.
Results — The average accuracy, recall, precision, and F1 score of the CNN based on both AP and lateral radiographs were each 0.98. The accuracy of the CNN was comparable to, or statistically significantly better than, that of the orthopedic surgeons regardless of the radiographic view used. In the CNN model, diagnosis based on both views was significantly more accurate than diagnosis from the lateral view alone and tended to be more accurate than diagnosis from the AP view alone.
Interpretation — The CNN performed comparably or superior to orthopedic surgeons in discriminating femoral neck fractures, trochanteric fractures, and non-fracture using both AP and lateral hip radiographs.
“…Feng et al. [10] note that advances in computer vision algorithms rest not only on deep learning techniques and large datasets, but also on advanced parallel computing architectures that enable efficient training of multi-layer neural networks. Furthermore, a modern GPU is not only a powerful graphics engine but also a highly parallelized computing processor, featuring the high throughput and high memory bandwidth needed for massively parallel algorithms.…”
Section: Graphics Processing Units (GPUs)
Object detection is one of the most fundamental and challenging problems in computer vision. Dedicated embedded systems, including the NVIDIA Jetson family, have emerged as a powerful way to deliver high processing capability at low power. The aim of the present work is the recognition of objects in complex rural scenes on an embedded system, together with verification of accuracy and processing time. For this purpose, a low-power embedded graphics processing unit (Jetson Nano) was selected, which allows multiple neural networks to run simultaneously and a computer vision algorithm to be applied for image recognition. The performance of deep learning networks such as ssd-mobilenet v1 and v2, pednet, multiped, and ssd-inception v2 was tested. Accuracy and processing time improved in some cases when all of the models considered in the research were applied. The pednet model performs well on pedestrian recognition, whereas the ssd-mobilenet v2 and ssd-inception v2 models are better at detecting other objects, such as vehicles, in complex scenarios.
“…The main idea of object detection is to recognize the object in the input image and find its location [18]. The designed system focuses on detecting handguns with high accuracy in minimal training time.…”
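The excerpt above frames detection as recognition plus localization. Localization quality is conventionally scored with intersection-over-union (IoU) between a predicted box and a ground-truth box; the excerpts do not spell this out, so the following is a generic sketch assuming axis-aligned boxes given as (x1, y1, x2, y2) corner coordinates:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    # Corners of the intersection rectangle (empty if boxes don't overlap).
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

print(iou((0, 0, 10, 10), (0, 0, 10, 10)))  # identical boxes -> 1.0
print(iou((0, 0, 10, 10), (5, 0, 15, 10)))  # half-shifted boxes -> 50/150 ≈ 0.333
```

A detection is typically counted as correct only when its class label matches and its IoU with a ground-truth box exceeds a threshold (0.5 is a common choice), which is what makes detection strictly harder to evaluate than whole-image classification.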
Section: The Proposed Model
“…MobileNetV1 [23] introduced depthwise separable convolutions (DSC) as an efficient replacement for standard CNN layers. In MobileNet, DSC decomposes the traditional convolution into a depthwise convolution and a pointwise convolution [18]. In the depthwise convolution, a single convolutional filter is applied to each input channel, while the pointwise convolution performs a 1×1 convolution to combine the separate channels, as shown in Figure 1.…”
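The efficiency gain of the factorization described above is easiest to see in parameter counts: a standard k×k convolution mixes all input channels for every output channel, whereas DSC pays for the spatial filtering once per input channel and mixes channels with a cheap 1×1 convolution. A small sketch (layer sizes here are illustrative, not taken from the cited papers):

```python
def standard_conv_params(k, c_in, c_out):
    # A standard k×k convolution: one k×k×c_in filter per output channel.
    return k * k * c_in * c_out

def dsc_params(k, c_in, c_out):
    # Depthwise: one k×k filter per input channel.
    depthwise = k * k * c_in
    # Pointwise: a 1×1 convolution mixing c_in channels into c_out channels.
    pointwise = c_in * c_out
    return depthwise + pointwise

# Example layer: 3×3 kernel, 128 input channels, 256 output channels.
std = standard_conv_params(3, 128, 256)  # 294,912
dsc = dsc_params(3, 128, 256)            # 1,152 + 32,768 = 33,920
print(std, dsc, round(std / dsc, 1))     # roughly 8.7× fewer parameters
```

The same factor applies to multiply-accumulate operations per spatial position, which is why MobileNet-style backbones suit embedded targets like the Jetson Nano discussed earlier.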
Many people have been killed indiscriminately by handguns in different countries. Terrorist acts, online fighting games, and mentally disturbed individuals are considered common causes of these crimes. To counter them, a real-time handgun detection surveillance system based on convolutional neural networks (CNNs) was built. The method focuses on detecting different weapons, such as handguns and rifles. Identifying handguns in surveillance cameras and images normally requires monitoring by a human supervisor, which can introduce errors. To overcome this issue, the designed system sends an alert message to the supervisor when a weapon is detected. The proposed system uses a pre-trained deep learning model, MobileNetV3-SSDLite, to perform handgun detection; this model was selected because it is fast and accurate at inference for detecting and classifying weapons in images. Experimental results on global handgun datasets containing various weapons showed that combining MobileNetV3 with SSDLite improves accuracy in real-time handgun detection.
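The alerting step described above is independent of the detector itself: it just filters the detector's output by class and confidence. A minimal sketch, assuming detections arrive as (label, score, box) tuples from a post-processed detector such as MobileNetV3-SSDLite; the label set, threshold, and message format are illustrative assumptions, not taken from the paper:

```python
WEAPON_LABELS = {"handgun", "rifle"}  # classes that should trigger an alert (assumed)
SCORE_THRESHOLD = 0.5                 # assumed confidence cut-off

def alert_messages(detections, threshold=SCORE_THRESHOLD):
    """Return alert strings for weapon detections above the confidence threshold.

    `detections` is assumed to be an iterable of (label, score, box) tuples,
    i.e. the post-processed output of an object detector.
    """
    alerts = []
    for label, score, box in detections:
        if label in WEAPON_LABELS and score >= threshold:
            alerts.append(f"ALERT: {label} detected (confidence {score:.2f}) at {box}")
    return alerts

detections = [
    ("person", 0.97, (12, 30, 80, 200)),   # not a weapon class, ignored
    ("handgun", 0.83, (60, 110, 90, 140)),
    ("rifle", 0.31, (5, 5, 40, 60)),       # below threshold, ignored
]
for msg in alert_messages(detections):
    print(msg)  # only the handgun detection produces an alert
```

In a deployed system the returned strings would be handed to whatever messaging channel reaches the supervisor; keeping that delivery code separate from the filtering logic makes the threshold easy to tune.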