Designing spectral convolutional networks is a challenging problem in graph learning. ChebNet, one of the early attempts, approximates the spectral convolution using Chebyshev polynomials. GCN simplifies ChebNet by utilizing only the first two Chebyshev polynomials, yet it still outperforms ChebNet on real-world datasets. GPR-GNN and BernNet demonstrate that the Monomial and Bernstein bases also outperform the Chebyshev basis in terms of learning the spectral convolution. Such conclusions are counter-intuitive in the field of approximation theory, where it is established that the Chebyshev polynomial achieves the optimal convergence rate for approximating a function. In this paper, we revisit the problem of approximating the spectral convolution with Chebyshev polynomials. We show that ChebNet's inferior performance is primarily due to illegal coefficients learnt by ChebNet when approximating analytic filter functions, which leads to over-fitting. We then propose ChebNetII, a new GNN model based on Chebyshev interpolation, which enhances the original Chebyshev polynomial approximation while reducing the Runge phenomenon. We conducted an extensive experimental study to demonstrate that ChebNetII can learn arbitrary graph spectral filters and achieve superior performance in both full- and semi-supervised node classification tasks.
Introduction

Graph neural networks (GNNs) have received considerable attention in recent years due to their remarkable performance on a variety of graph learning tasks, including social analysis [27,21,33], drug discovery [43,16,28], traffic forecasting [22,3,7], recommendation systems [36,40], and computer vision [42,5].

Spatial-based and spectral-based methods are the two primary categories of GNNs. To learn node representations, spatial-based GNNs [18,13,34] typically rely on a message propagation and aggregation mechanism between neighboring nodes. Spectral-based methods [8] design spectral graph convolutions, or equivalently spectral graph filters, in the spectral domain of the Laplacian matrix. We can further divide spectral-based GNNs into two categories based on whether or not their graph convolutions can be learned.

• Predetermined graph convolutions: GCN [18] uses a simplified first-order Chebyshev polynomial as its graph convolution, which is proven to be a low-pass filter [1,35,37,44]. APPNP [19] utilizes Personalized PageRank (PPR) to set the graph convolution and achieves a low-pass filter as well [20,44]. GNN-LF/HF [44] designs graph convolutions from the perspective of graph optimization functions, which can simulate high- and low-pass filters.

• Learnable graph convolutions: ChebNet [8] approximates the graph convolution with Chebyshev polynomials and learns the convolutional filters via trainable weights on the Chebyshev basis. GPR-GNN [6] uses the Monomial basis to approximate graph convolutions, which can derive high- or low-pass filters. ARMA [2] learns a rational convolutional filter via the family of Auto-Regressive Moving Average filters [24]. BernNet [15] utilizes th...
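As background for the Chebyshev-basis filters discussed above, the following is a minimal NumPy sketch of applying a polynomial spectral filter y = Σₖ wₖ Tₖ(L̂)x via the standard three-term Chebyshev recurrence, as ChebNet does. The function name, toy inputs, and coefficient values are illustrative assumptions, not taken from the paper; L̂ denotes a Laplacian rescaled so that its eigenvalues lie in [-1, 1].

```python
import numpy as np

def cheb_filter(L_hat, x, w):
    """Apply the filter sum_k w[k] * T_k(L_hat) @ x.

    L_hat : (n, n) scaled Laplacian with spectrum in [-1, 1]
    x     : (n,) node signal
    w     : Chebyshev coefficients w_0, ..., w_K
    """
    Tx_prev = x                      # T_0(L̂) x = x
    out = w[0] * Tx_prev
    if len(w) > 1:
        Tx_curr = L_hat @ x          # T_1(L̂) x = L̂ x
        out = out + w[1] * Tx_curr
        for wk in w[2:]:
            # Three-term recurrence: T_k = 2 L̂ T_{k-1} - T_{k-2}
            Tx_next = 2 * (L_hat @ Tx_curr) - Tx_prev
            out = out + wk * Tx_next
            Tx_prev, Tx_curr = Tx_curr, Tx_next
    return out

# Toy example: a diagonal L_hat makes the filter act directly on
# the "eigenvalues" 0.5 and -0.5 of each coordinate.
L_hat = np.diag([0.5, -0.5])
x = np.array([1.0, 1.0])
y = cheb_filter(L_hat, x, [1.0, 1.0, 1.0])
# Per coordinate: T_0 + T_1(λ) + T_2(λ) with T_2(λ) = 2λ² - 1,
# giving 1 + 0.5 - 0.5 = 1.0 at λ = 0.5 and 1 - 0.5 - 0.5 = 0.0 at λ = -0.5.
```

The recurrence avoids ever diagonalizing the Laplacian: each term costs one sparse matrix-vector product, which is what makes polynomial filters scalable. GCN's simplification corresponds to truncating this sum after the first-order term.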