It has an $n$-dimensional input vector $x(k) = [x_1(k), x_2(k), \cdots, x_n(k)]^T$ and an $n_3$-dimensional output vector $y(k) = [y_1(k), y_2(k), \cdots, y_{n_3}(k)]^T$. The architecture has an output-layer weight vector $w^{(3)}_{n_3 n_2}(k) \in \mathbb{R}^{n_3 \times 1}$, a hidden-layer weight vector $w^{(2)}_{n_2 n_1}(k) \in \mathbb{R}^{n_2 \times 1}$, and a context-layer weight vector $w^{(c)}_{n_2 n_2}(k) \in \mathbb{R}^{n_2 \times 1}$. Here $n_1$, $n_2$, and $n_3$ are the numbers of neurons in the quantum map layer, the second hidden layer, and the output layer, respectively.
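As a rough illustration of the dimensions above, the following NumPy sketch wires an Elman-style network with an $n$-dimensional input, a quantum map layer of $n_1$ neurons, a second hidden layer of $n_2$ neurons fed back by an $n_2$-wide context layer, and $n_3$ outputs. The sigmoid activation and the specific weight shapes are placeholder assumptions for illustration; the paper's actual quantum-map activation and update rules are defined elsewhere.

```python
import numpy as np

# Assumed layer sizes for the sketch (not taken from the paper).
n, n1, n2, n3 = 4, 6, 5, 2

rng = np.random.default_rng(0)
W1 = rng.standard_normal((n1, n))    # input -> quantum map layer (assumed shape)
W2 = rng.standard_normal((n2, n1))   # hidden-layer weights, cf. w^(2)
Wc = rng.standard_normal((n2, n2))   # context-layer weights, cf. w^(c)
W3 = rng.standard_normal((n3, n2))   # output-layer weights, cf. w^(3)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, context):
    """One time step k. `context` holds the previous hidden activations;
    the sigmoid is a stand-in for the quantum-map activation."""
    h1 = sigmoid(W1 @ x)                  # quantum map layer, n1 neurons
    h2 = sigmoid(W2 @ h1 + Wc @ context)  # hidden layer with context feedback
    y = W3 @ h2                           # output y(k), n3-dimensional
    return y, h2

x = rng.standard_normal(n)
y, ctx = forward(x, np.zeros(n2))   # first step: zero-initialized context
```

Feeding `ctx` back in as the next step's `context` gives the recurrent behavior that distinguishes the Elman architecture from a plain feedforward network.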