Baselines: We compare our method with traditional classification models, including KNN [72], Logistic Regression [73], Random Forest [74], Decision Tree [75], SVM [76], Extra Trees [77], and Naïve Bayes [78]. We then compare against various deep-learning-based state-of-the-art baselines, including VGG16 [79], VGG19 [79], ResNet50 [53], ResNet101 [53], NASNetMobile [80], NASNetLarge [80], InceptionV3 [81], InceptionV4 [82], Xception [83], DenseNet121 [84], CondenseNet74-4 [85], CondenseNet74-8 [85], EfficientNet-B7 [86], ResNet50 + AUGMIX [1], ResNet101 + AUGMIX [1], Noisy Student [7], EfficientNet-B7 + AdvProp [8], ANT + SIN [87], ResNet50 + FeatMatch [88], ResNet101 + FeatMatch [88], ResNet50 + FMix [89], ResNet101 + FMix [89], ResNet50 + CutMix [90] + MoEx [91], ResNet101 + CutMix [90] + MoEx [91], Pretrained Transformers [92], ResNet50 + SmoothMix [93], and ResNet101 + SmoothMix [93].

Comparison with state-of-the-art methods: For the traditional classification models, Table 1 shows that our method outperforms all of these baselines.
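The traditional baselines above follow the standard evaluation protocol: each classifier is fit on the training split and scored on a held-out test split. As a minimal sketch of that protocol (hypothetical: the dataset, hyperparameters, and split here are placeholders via scikit-learn, not the paper's actual data or configuration):

```python
# Hypothetical sketch of the baseline-comparison protocol: fit each
# traditional classifier on a train split, then report held-out accuracy.
# Dataset (sklearn digits) and default hyperparameters are placeholders,
# not the paper's experimental setup.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

baselines = {
    "KNN": KNeighborsClassifier(),
    "Logistic Regression": LogisticRegression(max_iter=2000),
    "Random Forest": RandomForestClassifier(random_state=0),
    "Decision Tree": DecisionTreeClassifier(random_state=0),
    "SVM": SVC(),
    "Extra Trees": ExtraTreesClassifier(random_state=0),
    "Naive Bayes": GaussianNB(),
}

# Held-out accuracy per baseline, highest first.
scores = {name: clf.fit(X_tr, y_tr).score(X_te, y_te)
          for name, clf in baselines.items()}
for name, acc in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {acc:.3f}")
```

The deep-learning baselines follow the same fit-then-score pattern, only with image backbones and augmentation schemes (AUGMIX, FMix, CutMix + MoEx, etc.) in place of the classical estimators.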