With the emergence of numerous link prediction methods, accurately evaluating them and selecting the appropriate one has become a key problem that cannot be ignored. Since AUC was first used for link prediction evaluation in 2008, it has arguably been the preferred metric because it balances the role of wins (the test link has a higher score than the unobserved link) against the role of draws (the two links have the same score). However, in many cases AUC does not discriminate well among link prediction methods, especially those based on local similarity. We therefore propose a new metric, called the W-index, which considers only the effect of wins and ignores draws. Our extensive experiments on a variety of networks show that the W-index makes the accuracy scores of link prediction methods more distinguishable: it not only widens the local gap between these methods but also enlarges their global distance. We further demonstrate the reliability of the W-index through ranking-change analysis and correlation analysis. In particular, some community-based approaches that have been deemed effective show no advantage after our reevaluation. Our results suggest that the W-index is a promising metric for link prediction evaluation, capable of offering convincing discrimination.
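The distinction between wins and draws can be made concrete with a small sketch. The standard link-prediction AUC is estimated by repeatedly comparing the score of a random test link against that of a random nonexistent link, giving AUC = (wins + 0.5·draws)/n; a wins-only variant in the spirit of the W-index described above would simply drop the draw term. The exact definition of the W-index is not given in this passage, so the wins-only formula below is an assumption for illustration:

```python
import random


def auc_and_wins_only(test_scores, nonexistent_scores,
                      n_comparisons=10000, seed=0):
    """Estimate the standard link-prediction AUC and a wins-only index.

    AUC = (wins + 0.5 * draws) / n  -- the usual sampled comparison.
    W   = wins / n                  -- assumed wins-only variant
                                       (draws contribute nothing).
    """
    rng = random.Random(seed)
    wins = draws = 0
    for _ in range(n_comparisons):
        # Pick one test-link score and one nonexistent-link score at random.
        s_test = rng.choice(test_scores)
        s_non = rng.choice(nonexistent_scores)
        if s_test > s_non:
            wins += 1        # a "win": test link scored strictly higher
        elif s_test == s_non:
            draws += 1       # a "draw": identical scores
    auc = (wins + 0.5 * draws) / n_comparisons
    w = wins / n_comparisons
    return auc, w
```

Local similarity indices such as common neighbors produce small integer scores, so draws are frequent; each draw pushes AUC toward 0.5, which compresses the scores of different methods together. Discarding draws, as sketched here, removes that compression and can make the methods easier to tell apart.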