The rapid advancement of deep learning has led to breakthroughs in image classification; however, these models are sensitive to adversarial perturbations, which can cause serious problems. Adversarial attacks attempt to change the model's output by adding noise to the input; in our research we propose a combined defense method against them. Two defense approaches have evolved in the literature: one robustifies the attacked model for higher accuracy, while the other detects adversarial examples. Only a few papers discuss both approaches, so our aim was to combine them to obtain a more robust model and to examine the combination, in particular the filtering capability of the detector. Our contribution is a theoretical proof that filtering based on the detector's decision is able to enhance accuracy. In addition, we developed a novel defense method called 2N labeling, which extends the idea of the NULL labeling method. While NULL labeling introduces only a single new class for all adversarial examples, the 2N labeling method doubles the number of classes: a new class is assigned to each original class as its adversarial counterpart, which assists both the detector and the robust classifier. The 2N labeling method was compared to competitor methods on two test datasets. The results show that our method surpassed the others and that it maintains constant classification performance regardless of the presence or amplitude of adversarial attacks.
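To make the 2N labeling scheme concrete, the sketch below illustrates one way it could be realized; it is not the paper's implementation. It assumes a toy PyTorch MLP, FGSM as the attack, and MNIST-like 28x28 inputs, none of which are specified above. Each clean sample of class i keeps label i, its adversarial version receives the twin label N + i, and at inference a prediction of N or above simultaneously flags the input as adversarial (the detector) and recovers the original class by subtracting N (the robust classifier).

```python
# Minimal sketch of the 2N labeling idea. Assumptions (not from the paper):
# FGSM as the attack, a toy MLP, and MNIST-like 1x28x28 inputs in [0, 1].
import torch
import torch.nn as nn
import torch.nn.functional as F

N_CLASSES = 10  # N original classes; the model predicts over 2N classes

model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28 * 28, 128),
    nn.ReLU(),
    nn.Linear(128, 2 * N_CLASSES),  # classes 0..N-1 clean, N..2N-1 adversarial
)

def fgsm(x, y, eps=0.1):
    """Generate adversarial examples with FGSM (an assumed attack choice)."""
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

def training_step(x_clean, y_clean, optimizer):
    """One step: clean samples keep label i, adversarial ones get label N + i."""
    x_adv = fgsm(x_clean, y_clean)
    x = torch.cat([x_clean, x_adv])
    y = torch.cat([y_clean, y_clean + N_CLASSES])  # twin adversarial class
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    optimizer.step()
    return loss.item()

def predict(x):
    """Detector and robust classifier in one head: a prediction >= N flags the
    input as adversarial; subtracting N recovers the original class."""
    pred = model(x).argmax(dim=1)
    is_adversarial = pred >= N_CLASSES
    recovered_class = torch.where(is_adversarial, pred - N_CLASSES, pred)
    return recovered_class, is_adversarial

# Example usage with random tensors standing in for a real dataset:
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(32, 1, 28, 28)
y = torch.randint(0, N_CLASSES, (32,))
print(training_step(x, y, optimizer))
print(predict(x))
```

Because the recovered class is read off the same 2N-way head, detection comes at no extra inference cost, which is consistent with the claim that the filtering decision and the robust classification reinforce each other.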