This paper proposes a class of Decentralized Approximate Newton (DEAN) methods for in-network convex optimization, where the nodes of a network seek a consensus that minimizes the sum of their individual objective functions through local interactions only. The proposed DEAN algorithms allow each node to repeatedly perform a local approximate Newton update, so that the nodes not only jointly track the global Newton direction but also drive their iterates toward each other. Under an assumption (local strong convexity) that is less restrictive than those of existing second-order methods, the DEAN algorithms enable the nodes to reach a consensus that can be arbitrarily close to the optimum. Moreover, for a particular DEAN algorithm, the nodes converge linearly to a common suboptimal solution with an explicit error bound; we also provide the iteration complexity for the suboptimal solution to achieve any given accuracy. Furthermore, we show that when the problem reduces to a quadratic program, the DEAN algorithms are guaranteed to converge to the exact optimum at a linear rate. Finally, simulations demonstrate the competitive performance of DEAN in convergence speed, accuracy, and efficiency.
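To make the setting concrete, the following is a minimal sketch of decentralized in-network optimization in the quadratic case mentioned above. It is not the authors' DEAN update; it is a generic approximate-Newton-flavored scheme (gradient tracking combined with local Hessian preconditioning) on a hypothetical 4-node ring, where node $i$ holds $f_i(x) = \tfrac{1}{2} a_i (x - b_i)^2$ and all quantities (`a`, `b`, `W`, `eps`) are illustrative assumptions.

```python
import numpy as np

# Hypothetical example, NOT the authors' exact DEAN update: a generic
# decentralized approximate-Newton scheme built from gradient tracking
# plus local Hessian preconditioning. Nodes on a 4-node ring minimize
# sum_i f_i(x), with f_i(x) = 0.5 * a_i * (x - b_i)^2 (the quadratic
# case, where exact linear convergence is claimed in the abstract).

a = np.array([1.0, 2.0, 4.0, 3.0])      # local Hessians f_i'' (local strong convexity)
b = np.array([0.0, 1.0, 2.0, 3.0])      # local minimizers
grad = lambda x: a * (x - b)            # stacked local gradients f_i'(x_i)

# doubly stochastic mixing matrix of the ring graph (illustrative weights)
W = np.array([[0.5, 0.25, 0.0, 0.25],
              [0.25, 0.5, 0.25, 0.0],
              [0.0, 0.25, 0.5, 0.25],
              [0.25, 0.0, 0.25, 0.5]])

eps = 0.05                              # damping of the local Newton-like step
x = b.copy()                            # each node starts at its own minimizer
s = grad(x)                             # tracker of the network-wide gradient
for _ in range(1000):
    x_new = W @ x - eps * s / a         # mix with neighbors, damped preconditioned step
    s = W @ s + grad(x_new) - grad(x)   # gradient tracking: sum_i s_i = sum_i f_i'(x_i)
    x = x_new

x_star = (a * b).sum() / a.sum()        # centralized optimum of sum_i f_i
```

In this quadratic instance the nodes reach consensus at the exact optimum `x_star`; for general (non-quadratic) objectives, schemes of this kind typically settle near, rather than at, the optimum, which mirrors the suboptimality-versus-step-size trade-off described in the abstract.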