“…The results of LUAD/LUSC classification are shown in Table 3. In this section, we use the features extracted from a DenseNet model,27 as per Hemati et al.9 We can observe that our suggested strategy outperformed earlier approaches for LUAD/LUSC classification by 2% (delivering 88%), which underlines the benefit of attention-pooling and contrastive learning. We also employed the SS-CAMIL blocks in this task, which improved the performance to 89%; however, since we cannot rule out that the model had already seen this data in the search task (both datasets come from the TCGA repository), we did not report those numbers in the table.…”
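The attention-pooling credited above can be sketched as attention-based multiple-instance-learning pooling in the style of Ilse et al.: patch-level features (e.g. DenseNet embeddings) are combined into a single slide-level embedding via learned softmax weights. The dimensions, variable names, and NumPy formulation below are illustrative assumptions, not the paper's exact architecture.

```python
import numpy as np

def attention_pool(H, V, w):
    """Attention-based MIL pooling over a bag of patch embeddings.

    H : (n_patches, d)  patch feature matrix (the "bag")
    V : (d_attn, d)     projection matrix for the attention scorer
    w : (d_attn,)       attention weight vector
    Returns the (d,) slide-level embedding and the (n_patches,) weights.
    """
    scores = w @ np.tanh(V @ H.T)            # un-normalised attention scores
    scores -= scores.max()                   # subtract max for softmax stability
    a = np.exp(scores) / np.exp(scores).sum()
    z = a @ H                                # weighted average of patch features
    return z, a

# Toy usage with random features standing in for DenseNet patch embeddings.
rng = np.random.default_rng(0)
H = rng.normal(size=(5, 8))   # 5 patches, 8-dim features
V = rng.normal(size=(4, 8))
w = rng.normal(size=4)
z, a = attention_pool(H, V, w)
```

In practice `V` and `w` are trained end-to-end with the classifier, so the model learns which patches are diagnostic rather than averaging all patches uniformly.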