We propose a quantum classifier that performs supervised learning using a quantum feature space. The input feature vectors are encoded in a single quNit (an N-level quantum system), as opposed to the more commonly used entangled multi-qubit systems. For training we use the widely used variational quantum algorithm, a hybrid quantum-classical algorithm in which the forward part of the computation is performed on quantum hardware while the feedback part is carried out on a classical computer. We introduce "single-shot training" in our scheme, in which all input samples belonging to the same class are used to train the classifier simultaneously. This significantly speeds up the training procedure and provides an advantage over classical machine learning classifiers. We demonstrate successful classification of popular benchmark datasets with our quantum classifier and compare its performance with that of several classical machine learning classifiers. We also show that our classifier requires significantly fewer training parameters than the classical classifiers.
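As a rough classical simulation of the hybrid loop described above, the sketch below amplitude-encodes a feature vector into a single quNit (here N = 3), applies a parameterised unitary, and takes the most probable measurement outcome as the predicted class. The ansatz, the infidelity loss, and the finite-difference training loop are illustrative assumptions for this toy example, not the paper's actual circuit or training rule.

```python
import numpy as np

def encode(x):
    """Amplitude-encode a feature vector as a quNit state (toy simulation)."""
    v = np.asarray(x, dtype=float)
    return v / np.linalg.norm(v)

def unitary(theta, n):
    """Parameterised unitary exp(-iH), with a real symmetric generator H
    holding one parameter per off-diagonal pair (illustrative ansatz)."""
    H = np.zeros((n, n))
    k = 0
    for i in range(n):
        for j in range(i + 1, n):
            H[i, j] = H[j, i] = theta[k]
            k += 1
    w, V = np.linalg.eigh(H)
    return V @ np.diag(np.exp(-1j * w)) @ V.conj().T

def predict(theta, x, n=3):
    """'Quantum' forward pass: prepare, rotate, measure; class = likeliest level."""
    probs = np.abs(unitary(theta, n) @ encode(x)) ** 2
    return int(np.argmax(probs))

def loss(theta, X, y, n=3):
    """Infidelity loss: one minus the probability of the correct level,
    averaged over a batch of samples."""
    return float(np.mean([1.0 - np.abs(unitary(theta, n) @ encode(x))[lbl] ** 2
                          for x, lbl in zip(X, y)]))

def train(X, y, n=3, steps=150, lr=0.5, eps=1e-4, seed=0):
    """Classical feedback part: finite-difference gradient descent on theta."""
    rng = np.random.default_rng(seed)
    theta = rng.normal(scale=0.1, size=n * (n - 1) // 2)
    for _ in range(steps):
        grad = np.zeros_like(theta)
        for k in range(theta.size):
            d = np.zeros_like(theta)
            d[k] = eps
            grad[k] = (loss(theta + d, X, y, n) - loss(theta - d, X, y, n)) / (2 * eps)
        theta -= lr * grad
    return theta
```

Evaluating the loss over the whole batch at once loosely mirrors the simultaneous use of all samples during training, though the actual single-shot scheme operates on the quantum state itself.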
Nonclassicality of quantum states comes in many shades, the most stringent being a new standard introduced recently in [1]. This is accomplished by extending the notion of local hidden variables (LHV) to generalised local hidden variables (GLHV), which renders many nonlocal states classical as well. We investigate these super-quantum states (called exceptional in [1]) in the family of SU(2)-invariant 3 × N level systems. We show that all super-quantum states admit a universal geometrical description, and that they are most likely to lie on a line segment in the manifold, irrespective of the value of N. We also show that although super-quantum states can be highly mixed, their relative rank with respect to the uniform state is always less than that of a state admitting a GLHV description.
A tensor network is a type of decomposition used to express and approximate large arrays of data. A given dataset, quantum state, or higher-dimensional multilinear map is factored and approximated by a composition of smaller multilinear maps. This is reminiscent of how a Boolean function might be decomposed into a gate array: that decomposition is a special case of tensor decomposition in which the tensor entries are restricted to 0 and 1 and the factorisation is exact. The associated techniques are called tensor network methods: the subject developed independently in several distinct fields of study, which have more recently become interrelated through the language of tensor networks. The central questions in the field concern the expressibility of tensor networks and the reduction of computational overheads. A merger of tensor networks with machine learning is natural. On the one hand, machine learning can aid in determining a factorisation of a tensor network approximating a data set. On the other hand, a given tensor network structure can be viewed as a machine learning model, whose parameters are adjusted to learn or classify a data set. In this survey we review the basics of tensor networks and explain the ongoing effort to develop the theory of tensor networks in machine learning.
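To make the factorisation idea concrete, here is a minimal sketch (assuming nothing beyond NumPy) of one common tensor network, the tensor train (matrix product state): a dense tensor is split into a chain of three-index cores by repeated truncated SVDs, and contracting the cores recovers an approximation of the original tensor.

```python
import numpy as np

def tensor_train(T, max_rank):
    """Factor a dense tensor into a chain of 3-index cores via repeated
    truncated SVDs (the standard TT-SVD sweep)."""
    dims = T.shape
    cores = []
    M = T.reshape(dims[0], -1)
    r_prev = 1
    for k in range(len(dims) - 1):
        # Split off mode k: rows index (previous rank, current mode).
        M = M.reshape(r_prev * dims[k], -1)
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        r = min(max_rank, len(s))          # truncate to the target bond rank
        cores.append(U[:, :r].reshape(r_prev, dims[k], r))
        M = np.diag(s[:r]) @ Vt[:r]        # carry the remainder rightwards
        r_prev = r
    cores.append(M.reshape(r_prev, dims[-1], 1))
    return cores

def contract(cores):
    """Contract the chain of cores back into a full dense tensor."""
    out = cores[0]
    for core in cores[1:]:
        out = np.tensordot(out, core, axes=([-1], [0]))
    return out.squeeze(axis=(0, -1))       # drop the dummy boundary ranks
```

The truncation rank `max_rank` controls the trade-off the paragraph above alludes to: a smaller bond rank means fewer parameters and cheaper contractions, at the cost of approximation error; for a tensor whose exact TT rank is within the cap, the reconstruction is exact.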