Seismic facies classification clusters seismic data samples according to their attributes. Year after year, the 3D datasets used in exploration geophysics grow in size, complexity, and number of attributes, demanding ever more efficient classification. In this work, we explore the use of Graphics Processing Units (GPUs) to classify seismic surveys with the well-established machine learning method k-means. We show that the high-performance distributed implementation of k-means available in the NVIDIA RAPIDS library can classify facies of large seismic datasets much faster than a classical parallel CPU implementation (up to 258-fold faster on NVIDIA Tesla GPUs), especially for large seismic blocks. We tested the algorithm on several real seismic volumes, including F3 Netherlands, Parihaka, and Kahu (ranging from 12 GB to 66 GB).
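As a minimal sketch of the classification step described above (not the authors' implementation), k-means facies assignment can be illustrated with scikit-learn's `KMeans` as a CPU stand-in; the RAPIDS cuML library exposes a near drop-in GPU counterpart (`cuml.cluster.KMeans`). The attribute matrix and cluster count here are synthetic placeholders:

```python
import numpy as np
from sklearn.cluster import KMeans

# Synthetic stand-in for a seismic attribute volume:
# each row is one sample, each column one seismic attribute.
rng = np.random.default_rng(0)
attributes = rng.normal(size=(10_000, 4))

# k-means assigns every sample to one of n_clusters facies.
# With RAPIDS, cuml.cluster.KMeans offers a similar interface on the GPU.
kmeans = KMeans(n_clusters=6, n_init=10, random_state=0)
facies_labels = kmeans.fit_predict(attributes)

print(facies_labels.shape)  # one facies label per sample
```

In practice the attribute matrix would be built by flattening the 3D survey so that each voxel (or trace window) contributes one row, and the resulting label array is reshaped back to the volume's geometry for visualization.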