Authorship verification attempts to decide whether the author of a given set of texts is also the author of a disputed text. Compared to closed-set and open-set attribution, the most popular tasks in the relevant literature, the verification setting has some important advantages. First, it is more general, since any attribution problem can be decomposed into a series of verification cases. Second, certain factors that affect the performance of closed-set and open-set attribution, such as the size of the candidate set and the distribution of training texts over the candidate authors, have limited impact on authorship verification. It is therefore more feasible to estimate the error rate of authorship attribution technology, as required in the framework of forensic applications, when focusing on the verification setting.

Recently, there has been increasing interest in authorship verification, mainly due to the PAN shared tasks organized in 2013, 2014, and 2015. Multiple methods were developed and tested on new benchmark corpora covering several languages and genres. This paper presents a review of recent advances in this field, focusing on the evaluation results of the PAN shared tasks. Moreover, it discusses successes, failures, and open issues.
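The decomposition of attribution into verification cases mentioned above can be sketched as follows. This is a minimal illustration, not a method from the literature: the `verify` scoring function and the acceptance threshold are hypothetical placeholders that a concrete system would supply.

```python
def attribute(disputed_text, candidates, verify, threshold=0.5):
    """Decompose an attribution problem into verification cases.

    `candidates` maps each author to a list of texts of known authorship.
    `verify` is a hypothetical scoring function: given an author's known
    texts and the disputed text, it returns a similarity score in [0, 1].
    Each candidate author yields one verification case; the author with
    the highest score is returned if that score passes the threshold,
    otherwise None (the open-set case of rejecting all candidates).
    """
    scores = {
        author: verify(known_texts, disputed_text)
        for author, known_texts in candidates.items()
    }
    best = max(scores, key=scores.get)
    return best if scores[best] >= threshold else None


def toy_verify(known_texts, disputed_text):
    """Toy verifier based on vocabulary overlap (for illustration only)."""
    known_words = {w for text in known_texts for w in text.split()}
    disputed_words = set(disputed_text.split())
    return len(known_words & disputed_words) / len(disputed_words)
```

With a closed set of candidates, the same routine performs closed-set attribution by always returning the best-scoring author; lowering or removing the threshold recovers that behaviour.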