“…, K are built up incrementally, where K denotes the current number of allocated vectors w. Each w_k is attached to a label vector u_k, where u_kc ∈ {−1, 0, +1} is the model's target output for category c, representing a negative, missing, or positive label output, respectively. Each cLVQ node w_k can therefore represent several categories c. For the category-specific distance computation d_c we use a weighted Euclidean distance with specific weight factors λ_cf, related to the generalized relevance learning vector quantization (GRLVQ) method proposed by Hammer & Villmann (2002): …”
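As a rough illustration of the category-specific distance described above, the sketch below computes a weighted Euclidean distance between an input vector x and a node w_k, using per-category relevance weights λ_cf as in GRLVQ. The function name and the squared form of the distance are assumptions for illustration; the excerpt's own formula (cut off after the colon) may differ in detail.

```python
import numpy as np

def category_distance(x, w_k, lam_c):
    """Hypothetical sketch of the category-specific distance d_c.

    x     -- input feature vector
    w_k   -- cLVQ node (prototype) vector w_k
    lam_c -- relevance weights lambda_cf for category c, one per feature f

    Returns the weighted squared Euclidean distance
    sum_f lambda_cf * (x_f - w_kf)^2, the form used in GRLVQ.
    """
    diff = x - w_k
    return float(np.sum(lam_c * diff * diff))
```

With x = (1, 2), w_k = (0, 0), and λ_c = (1, 0.5), the distance is 1·1 + 0.5·4 = 3.0; setting a weight λ_cf to zero makes feature f irrelevant for category c, which is the point of the relevance weighting.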