Orthogonality-based label coding is an often-used technique in multi-class classification. By encoding the labels as multi-dimensional orthogonal codewords, many binary classifiers can be naturally extended to the multi-class case. For an unseen sample, such a classifier first estimates the sample's codeword and then computes its distances to the class label codewords; the nearest label is assigned as the sample's class. However, these classifiers can hardly guarantee that the estimated codewords remain orthogonal to the codewords of the other classes, so the codewords of different classes are likely to overlap to some extent, which degrades classification performance. Proposed is a novel label correction strategy that keeps the estimated sample codewords as orthogonal as possible to the other classes' label codewords, and thereby preserves as far as possible the inter-orthogonality of the codewords. The strategy is combined with two state-of-the-art classifiers: the regularised least square classifier and the least square support vector machine. Experiments on UCI datasets demonstrate the effectiveness of the method.

Introduction: Multi-class classification is widespread in real applications, such as face recognition, text mining and medical analysis [1]. Compared to binary classification, multi-class classification is more delicate, since many existing successful classifiers are basically designed for binary rather than multi-class problems [2]. Up to now, many strategies have been developed to solve this problem, and they fall into three basic categories [2]. The first category is label coding, which can extend some binary classifiers to multi-class scenarios directly. By transforming the labels into codewords, classification reduces to finding the label codeword nearest to the estimated sample codeword.
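The nearest-codeword decoding step described above can be sketched as follows. This is a minimal illustration, not the letter's implementation: the one-of-c codewords and the estimated codeword are hypothetical, and the estimate would in practice come from a trained multi-output classifier.

```python
import numpy as np

# One-of-c orthogonal codewords for a 4-class problem: row i is class i's codeword.
labels = np.eye(4)

# Hypothetical estimated codeword for an unseen sample (e.g. a classifier output).
y_hat = np.array([0.9, 0.2, -0.1, 0.05])

# Compute the Euclidean distance from the estimate to each label codeword
# and assign the class whose codeword is nearest.
dists = np.linalg.norm(labels - y_hat, axis=1)
pred = int(np.argmin(dists))  # here the estimate lies closest to class 0
```

Because all one-of-c codewords have equal norm, the nearest codeword in Euclidean distance is also the one with the largest inner product with the estimate.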
The second category decomposes the multi-class problem into several binary classification tasks that can be efficiently solved by binary classifiers, e.g. the support vector machine [2]. The corresponding strategies include one-versus-all, all-versus-all and error-correcting output coding. The third category arranges the classes in a hierarchical tree and applies a binary classifier at each node of the tree until a leaf node is reached [2].

Orthogonality-based label coding is the most often-used coding format in the first category, where a typical paradigm is one-of-c coding. By transforming the labels into orthogonal multi-dimensional codewords, the coding attempts to maximise the diversity of the classes. Many binary classifiers can directly use these codewords instead of the one-dimensional binary-class labels to solve multi-class classification, aiming to make the estimated sample codewords close to the corresponding label codewords; examples include decision trees [3], neural networks [4], the regularised least square classifier (RLSC) [4] and the least square support vector machine (LSSVM).
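As a rough illustration of how a least-squares classifier can be trained directly on one-of-c codewords, the following sketch fits a regularised least squares model to one-hot targets and classifies by the nearest codeword (for equal-norm orthogonal codewords this is simply the largest output coordinate). The toy data and regularisation parameter are hypothetical; the sketch is not the RLSC formulation used in the letter.

```python
import numpy as np

def rlsc_fit(X, y, n_classes, lam=1e-2):
    """Fit a linear map to one-of-c codeword targets by ridge regression."""
    Y = np.eye(n_classes)[y]              # encode labels as one-of-c codewords
    d = X.shape[1]
    # Regularised least squares: W = (X^T X + lam*I)^{-1} X^T Y
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)

def rlsc_predict(X, W):
    """Estimate codewords X @ W and pick the nearest one-of-c codeword."""
    return np.argmax(X @ W, axis=1)

# Hypothetical two-class toy data: two samples near each axis direction.
X = np.array([[1.0, 0.0], [1.0, 0.1], [0.0, 1.0], [0.1, 1.0]])
y = np.array([0, 0, 1, 1])

W = rlsc_fit(X, y, n_classes=2)
preds = rlsc_predict(X, W)  # recovers the training labels on this toy set
```

The same decoding rule applies unchanged to any multi-output regressor, which is why label coding extends binary least-squares machinery to the multi-class case so directly.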