We propose a novel accelerated variance-reduced gradient method called ANITA for finite-sum optimization. In this paper, we consider both the general convex and strongly convex settings. In the general convex setting, ANITA achieves the convergence result $O\big(n\min\big\{1+\log\frac{1}{\epsilon\sqrt{n}},\ \log\sqrt{n}\big\}+\sqrt{\frac{nL}{\epsilon}}\big)$, which improves the previous best result $O\big(n\min\big\{\log\frac{1}{\epsilon},\ \log n\big\}+\sqrt{\frac{nL}{\epsilon}}\big)$ given by Varag (Lan et al., 2019). In particular, for a very wide range of the error tolerance $\epsilon$, defined by $f(x_T)-f^*\leq\epsilon$, where $n$ is the number of data samples, ANITA achieves the optimal convergence result $O\big(n+\sqrt{\frac{nL}{\epsilon}}\big)$, matching the lower bound $\Omega\big(n+\sqrt{\frac{nL}{\epsilon}}\big)$ provided by Woodworth and Srebro (2016). To the best of our knowledge, ANITA is the first accelerated algorithm that exactly achieves this optimal result $O\big(n+\sqrt{\frac{nL}{\epsilon}}\big)$ for general convex finite-sum problems. In the strongly convex setting, we also show that ANITA achieves the optimal convergence result $O\big(\big(n+\sqrt{\frac{nL}{\mu}}\big)\log\frac{1}{\epsilon}\big)$, matching the lower bound $\Omega\big(\big(n+\sqrt{\frac{nL}{\mu}}\big)\log\frac{1}{\epsilon}\big)$ provided by Lan and Zhou (2015). Moreover, ANITA enjoys a simpler loopless algorithmic structure, unlike previous accelerated algorithms such as Katyusha (Allen-Zhu, 2017) and Varag (Lan et al., 2019), which use an inconvenient double-loop structure. Finally, the experimental results show that ANITA converges faster than the previous state of the art, Varag (Lan et al., 2019), validating our theoretical results and confirming the practical superiority of ANITA.
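For reference, the abstract presupposes the standard finite-sum setting; a minimal statement of the problem class, assuming each component $f_i$ is $L$-smooth (and $f$ is $\mu$-strongly convex in the strongly convex case), is
\[
\min_{x\in\mathbb{R}^d}\ f(x) := \frac{1}{n}\sum_{i=1}^{n} f_i(x),
\]
where $n$ is the number of data samples and an $\epsilon$-approximate solution $x_T$ satisfies $f(x_T)-f^*\leq\epsilon$ with $f^*:=\min_x f(x)$.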
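To illustrate the loopless structure contrasted above with double-loop methods, here is a minimal sketch in the style of a loopless variance-reduced estimator (L-SVRG-style), not ANITA's exact accelerated update, which additionally involves momentum and a step-size schedule not given in this abstract: the explicit outer loop is replaced by a coin flip that refreshes the full-gradient anchor with small probability each iteration. The function name `loopless_vr_gradient`, the oracle `grad_i`, and all parameter values are illustrative assumptions, not the paper's notation.

```python
import numpy as np

def loopless_vr_gradient(grad_i, n, x0, step_size=0.1, p=None, iters=1000, seed=0):
    """Loopless variance-reduced gradient sketch (L-SVRG-style; not ANITA's exact update).

    grad_i(x, i): gradient of the i-th component f_i at x.
    With probability p per step, the full-gradient anchor is refreshed in place,
    replacing the explicit outer loop of double-loop methods such as Katyusha.
    """
    rng = np.random.default_rng(seed)
    p = p if p is not None else 1.0 / n      # anchor refresh roughly once per epoch in expectation
    x, w = x0.copy(), x0.copy()              # current iterate and anchor point
    full_grad = np.mean([grad_i(w, j) for j in range(n)], axis=0)
    for _ in range(iters):
        i = rng.integers(n)
        # unbiased estimator: E[g | x] = grad f(x), with reduced variance near the anchor
        g = grad_i(x, i) - grad_i(w, i) + full_grad
        x = x - step_size * g
        if rng.random() < p:                 # the "loopless" coin flip
            w = x.copy()
            full_grad = np.mean([grad_i(w, j) for j in range(n)], axis=0)
    return x

# Toy usage: least squares, f_i(x) = 0.5 * (a_i @ x - b_i)**2
rng = np.random.default_rng(1)
A, b = rng.standard_normal((50, 5)), rng.standard_normal(50)
x_hat = loopless_vr_gradient(lambda x, i: (A[i] @ x - b[i]) * A[i],
                             n=50, x0=np.zeros(5), step_size=0.05, iters=5000)
```

The single-loop design means the anchor refresh needs no tuning of inner-loop length; only the refresh probability p (here 1/n) plays that role.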