2015 IEEE International Conference on Image Processing (ICIP)
DOI: 10.1109/icip.2015.7350926
Bone extraction in X-ray images by analysis of line fluctuations

Cited by 10 publications (9 citation statements)
References 8 publications
“…We also see a significant improvement when benchmarking against state-of-the-art classical image processing techniques. Figure 5 shows a comparison between our architecture and the work of Kazeminia et al.,4 who built upon that of Bandyopadhyay et al.5,6 The XNet segmentation produces smoothly connected boundaries around the bone regions, in addition to differentiating well between bone and soft tissue regions. It should be noted that in producing this output, our algorithm was trained on a set of high-resolution TIF images, as produced by the X-ray scanner, whereas this analysis was run on significantly lower-quality JPEG images, meaning our network could not achieve its full potential on this image.…”
Section: Results
confidence: 99%
“…Despite our dataset being small compared to those used in many machine learning applications, we achieve an overall accuracy that is significantly higher, and more generalisable, than work using classical image processing techniques.2-6,12 Additionally, we show that our architecture outperforms leading image segmentation networks developed for other applications.13 Our paper is structured as follows: after reviewing a selection of the existing literature in Section 2, we discuss our dataset in Section 3 - how we collect and label the data, and the augmentation methods used to prevent overfitting; in Section 4 we address the design and structure of our CNN, covering the training and testing stages; the results of the network are discussed in Section 5, where we also address the post-processing stage of false positive reduction; a comparison of our results with other works from the classical and machine learning literature is presented in Section 6, after which we discuss future applications and developments in Section 7.…”
Section: Introduction
confidence: 86%