SUMMARY

In this paper, two accelerated divide-and-conquer (ADC) algorithms are proposed for the symmetric tridiagonal eigenvalue problem; they cost O(N^2 r) flops in the worst case, where N is the dimension of the matrix and r is a modest number depending on the distribution of the eigenvalues. Both algorithms use hierarchically semiseparable (HSS) matrices to approximate certain intermediate eigenvector matrices, which are Cauchy-like and off-diagonally low-rank. The two versions differ in their HSS construction algorithms: one (denoted ADC1) uses a structured low-rank approximation method, and the other (ADC2) uses a randomized HSS construction algorithm. For the ADC2 algorithm, a method is proposed to estimate the off-diagonal rank. Numerous experiments demonstrate the stability and efficiency of these algorithms. They are implemented in parallel in a shared-memory environment, and some parallel implementation details are included. Compared with highly optimized multithreaded libraries such as Intel MKL, the ADC algorithms can be more than 6x faster for some large matrices with few deflations.