“…For this extended abstract, we have updated the list of baseline methods by adding new approaches (TCT [Huang et al., 2017], TrAdaB [Huang et al., 2017], DANN [Ganin et al., 2016], CL-TS [Zhou et al., 2015], Bi-PV [Xu and Wan, 2017], BiDRL [Zhou et al., 2016b], WSDNNs [Zhou et al., 2016a], CLDFA [Xu and Yang, 2017]) that have been published in the cross-domain and cross-lingual arenas after our original work [Moreo Fernández et al., 2016], and we have kept those which performed best in our original evaluation (SCL-MI [Blitzer et al., 2007], SFA [Pan et al., 2010], SDA [Glorot et al., 2011], and SSMC [Xiao and Guo, 2014]). We also consider an upper bound that trains the classifier on the training set of the target domain ("Upper"), and a lower bound that trains the classifier on the source domain and then applies it directly to the target domain, i.e., without carrying out any sort of knowledge transfer ("No-Trans").…”