2019 IEEE International WIE Conference on Electrical and Computer Engineering (WIECON-ECE)
DOI: 10.1109/wiecon-ece48653.2019.9019916
Breast Cancer Histopathology Image Classification and Localization using Multiple Instance Learning

Abstract: Breast cancer has the highest mortality among cancers in women. With the number of breast cancer patients increasing, computer-aided pathology that analyzes microscopic histopathology images can reduce the cost and delays of diagnosis. Deep learning in histopathology has attracted attention over the last decade, achieving state-of-the-art performance in classification and localization tasks. The convolutional neural network, a deep learning framework, provides remarkable results on tissue images…

Cited by 16 publications (12 citation statements)
References 16 publications (24 reference statements)
“…In the context of CPath, these labelled bags often represent annotated slides of far more unlabelled patch instances. 437 As labels at the WSI level are much easier to obtain (and hence more prevalent) than patch-level annotations, MIL has been applied to CPath by a significant number of papers. 62, 63, 64, 267, 268, 308, 316, 418, 437–456 Since both utilize coarser annotations for training on massive images, MIL is similar to weakly-supervised learning.…”
Section: Model Learning For CPath
mentioning
confidence: 99%
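The statement above describes the core MIL setup in CPath: a whole-slide image is a bag of patch instances, and only the bag (slide-level) label is available for training. Below is a minimal sketch of that setup under the classic MIL assumption (a bag is positive if at least one instance is positive), implemented as max-pooling over instance scores; the feature dimension, patch count, and module names are illustrative assumptions, not details taken from the cited papers.

```python
# A minimal sketch of slide-level MIL training, assuming hypothetical
# shapes: one bag = one slide's patch embeddings from a CNN backbone.
import torch
import torch.nn as nn

class MaxPoolMIL(nn.Module):
    """Instance scorer + max-pooling: the bag is positive iff its
    highest-scoring patch is positive (the classic MIL assumption)."""
    def __init__(self, feat_dim: int = 512):
        super().__init__()
        self.instance_scorer = nn.Linear(feat_dim, 1)

    def forward(self, patches: torch.Tensor) -> torch.Tensor:
        # patches: (num_patches, feat_dim) patch embeddings for one slide
        instance_logits = self.instance_scorer(patches).squeeze(-1)
        return instance_logits.max()  # bag-level logit via max-pooling

bag = torch.randn(200, 512)          # one slide as a bag of 200 patches
model = MaxPoolMIL()
# Only the slide-level label is needed, as the quoted statement notes.
loss = nn.functional.binary_cross_entropy_with_logits(
    model(bag), torch.tensor(1.0))
```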
“…Being able to reason about the decision-making process is useful to gain better insight into the strengths and weaknesses of DL models. To this end, Patil et al [54] took a multi-instance learning approach in a weakly supervised manner for the classification of breast cancer histology images. As shown in Figure 7, each input image is partitioned into multiple smaller patches.…”
Section: Automated Breast Cancer Diagnosis
mentioning
confidence: 99%
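The review quoted above notes that each input image is partitioned into multiple smaller patches before MIL is applied. A minimal sketch of such non-overlapping patch extraction follows; the 32x32 tile size, image size, and helper name are assumptions for illustration, not the configuration reported in the paper.

```python
# A minimal sketch of partitioning an image into MIL instances.
import torch

def partition_into_patches(image: torch.Tensor, tile: int = 32) -> torch.Tensor:
    """Split a (C, H, W) image into non-overlapping (C, tile, tile) patches.
    Assumes H and W are multiples of `tile`."""
    c, h, w = image.shape
    # unfold twice: over height, then width -> (C, H//tile, W//tile, tile, tile)
    patches = image.unfold(1, tile, tile).unfold(2, tile, tile)
    # flatten the grid into a bag of instances: (num_patches, C, tile, tile)
    return patches.permute(1, 2, 0, 3, 4).reshape(-1, c, tile, tile)

img = torch.randn(3, 448, 448)
print(partition_into_patches(img).shape)  # torch.Size([196, 3, 32, 32])
```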
“…MIL works with bags, so it can be used both for segmentation and for classification, making it a weakly supervised learning process. Note that interpretability is a recurring problem in convolutional networks [Patil et al 2019]; it means understanding why the network gave a particular answer for a given image. To that end, [Patil et al 2019] produced a MIL-based algorithm that identifies cancerous regions through the instances and, to address the interpretability problem, displays each instance in the bag and differentiates the instances from one another in an entirely visual way, showing where the network learned and which instances would contain cancerous material.…”
Section: MIL (Multi-Instance Learning)
unclassified
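The statement above describes visual, instance-level interpretability: per-patch evidence is shown back on the slide so cancerous regions can be localized. A minimal sketch of that idea, assuming per-instance scores (e.g., attention weights) and a known patch grid, is to lay the scores back out as a normalized heatmap; the grid shape and score source are illustrative assumptions.

```python
# A minimal sketch of instance-level localization: per-patch scores are
# arranged on the slide grid they were cut from, for visual inspection.
import torch

def scores_to_heatmap(instance_scores: torch.Tensor,
                      grid_hw: tuple[int, int]) -> torch.Tensor:
    """Arrange per-instance scores (num_patches,) into the (rows, cols)
    grid of the original tiling, normalized to [0, 1] for display."""
    heat = instance_scores.reshape(grid_hw)
    heat = heat - heat.min()
    return heat / heat.max().clamp_min(1e-8)

scores = torch.rand(196)                    # e.g. attention weight per patch
heatmap = scores_to_heatmap(scores, (14, 14))
```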
“…Compared with the related works presented in Table 2, all based on MIL and BreakHis, we highlight that [Patil et al 2019] uses a different attention formula in which there is no sigmoid to refine the computation and make it learn more complex similarities. [Das et al 2020], in turn, uses (224,224)-pixel images for its instances; in this work, due to hardware limitations, we could not use very large images as instances, so we used instances of sizes (32,32) and (64,64).…”
Section: Px
unclassified
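The comparison above turns on whether the attention pooling includes a sigmoid gate. In standard attention-based deep MIL formulations, the plain variant weights instances by softmax(w^T tanh(V h_k)), while the gated variant multiplies in sigm(U h_k) before the final projection. The sketch below contrasts the two generic variants; dimensions and names are illustrative assumptions, and the exact formula used by [Patil et al 2019] may differ from this generic version.

```python
# A minimal sketch of plain vs. sigmoid-gated attention pooling for MIL.
import torch
import torch.nn as nn

class AttentionPool(nn.Module):
    def __init__(self, feat_dim: int = 512, hidden: int = 128,
                 gated: bool = False):
        super().__init__()
        self.V = nn.Linear(feat_dim, hidden)
        self.U = nn.Linear(feat_dim, hidden) if gated else None
        self.w = nn.Linear(hidden, 1)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # h: (num_patches, feat_dim) instance embeddings
        a = torch.tanh(self.V(h))
        if self.U is not None:
            a = a * torch.sigmoid(self.U(h))    # gated variant only
        weights = torch.softmax(self.w(a), dim=0)  # attention per instance
        return (weights * h).sum(dim=0)            # weighted bag embedding

bag = torch.randn(200, 512)
plain = AttentionPool(gated=False)(bag)  # no sigmoid, as in the cited work
gated = AttentionPool(gated=True)(bag)   # sigmoid-gated refinement
```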