Automatic text summarization has progressively improved over time: early work focused on extractive and compressive models (Jing and McKeown, 2000; Knight and Marcu, 2002; Clarke and Lapata, 2008; Filippova et al., 2015; Kedzie et al., 2015), while later work moved toward compressive and abstractive summarization based on graphs and concept maps (Giannakopoulos, 2009; Ganesan et al., 2010; Falke and Gurevych, 2017), discourse trees (Gerani et al., 2014), syntactic parse trees (Wang et al., 2013; Cheung and Penn, 2014), and Abstract Meaning Representations (AMR) (Liu et al., 2015; Dohare and Karnick, 2017). Recent work has also adopted machine-translation-inspired neural seq2seq models for abstractive summarization, with advances in hierarchical, distraction-based, saliency, and graph-attention modeling (Rush et al., 2015; Chopra et al., 2016; Nallapati et al., 2016; Chen et al., 2016; Tan et al., 2017).