2018
DOI: 10.48550/arxiv.1803.07054
Preprint
Optimal link prediction with matrix logistic regression

Abstract: We consider the problem of link prediction, based on partial observation of a large network and on side information associated to its vertices. The generative model is formulated as a matrix logistic regression. The performance of the model is analysed in a high-dimensional regime under a structural assumption. The minimax rate for the Frobenius-norm risk is established and a combinatorial estimator based on the penalised maximum likelihood approach is shown to achieve it. Furthermore, it is shown that this ra…
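The generative model described in the abstract can be illustrated with a minimal sketch. It assumes the standard matrix logistic regression parametrisation, in which the probability of a link between vertices i and j is a logistic function of a bilinear form in their side-information vectors; the function names, dimensions, and random covariates below are illustrative choices, not taken from the paper.

```python
import numpy as np

def sigmoid(z):
    """Logistic link function."""
    return 1.0 / (1.0 + np.exp(-z))

def link_probabilities(X, Theta):
    """P(A_ij = 1) = sigmoid(x_i^T Theta x_j), computed for all pairs (i, j).

    X     : (n, d) matrix of side information, one row per vertex.
    Theta : (d, d) unknown parameter matrix to be estimated.
    """
    logits = X @ Theta @ X.T  # bilinear form for every vertex pair at once
    return sigmoid(logits)

rng = np.random.default_rng(0)
n, d = 5, 3                       # n vertices, d-dimensional side information
X = rng.normal(size=(n, d))       # covariates attached to the vertices
Theta = rng.normal(size=(d, d))   # ground-truth parameter (hypothetical)
P = link_probabilities(X, Theta)  # (n, n) matrix of link probabilities
A = rng.binomial(1, P)            # sampled adjacency matrix (no symmetrisation;
                                  # an undirected graph would use one triangle)
```

Under the paper's structural assumption, estimation targets Theta from a partially observed A, with risk measured in Frobenius norm; the sketch above only shows the forward (generative) direction.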

Cited by 6 publications (8 citation statements)
References 56 publications
“…Our work is also related to a line of works on average-case computational hardness and statistical-computational trade-offs. The average-case reduction approach has been commonly used to show computational lower bounds for many recent high-dimensional problems, such as testing k-wise independence (Alon et al, 2007), biclustering (Ma and Wu, 2015; Cai et al, 2017; Cai and Wu, 2018), community detection (Hajek et al, 2015), RIP certification (Wang et al, 2016a; Koiran and Zouzias, 2014), matrix completion (Chen, 2015), sparse PCA (Berthet and Rigollet, 2013a,b; Brennan et al, 2018; Gao et al, 2017; Wang et al, 2016b), universal submatrix detection, sparse mixture and robust estimation, a financial model with asymmetric information (Arora et al, 2011), finding dense common subgraphs (Charikar et al, 2018), link prediction (Baldin and Berthet, 2018), and online local learning (Awasthi et al, 2015). See also the web of average-case reductions to a number of problems in Brennan et al (2018).…”
Section: Related Literature
“…The first direction is to design an optimization algorithm that achieves global convergence similarly to [45], but works with the sparsity-inducing penalty and a possibly non-quadratic loss function. The second direction is to generalise the results of [2] to the general inductive matrix completion setting and to obtain general minimax error bounds for the considered problem.…”
Section: Discussion
“…A combinatorial optimization algorithm that achieves fast recovery rates in the noisy case is proposed in [39]. In the related setting of inductive link prediction in graphs, [2] obtains minimax optimal rates in the sparse and low-rank regime and uncovers trade-offs between the statistical rates and the computational complexity.…”
Section: Related Work
“…A number of average-case reductions in the literature have started with different average-case assumptions than the planted clique conjecture. Variants of planted dense subgraph have been used to show hardness in a model of financial derivatives under asymmetric information [ABBG11], link prediction [BB18], finding dense common subgraphs [CNW18] and online local learning of the size of a label set [ACLR15]. Hardness conjectures for random constraint satisfaction problems have been used to show hardness in improper learning complexity [DLSS14], learning DNFs [DSS16] and hardness of approximation [Fei02].…”
Section: Related Work on Statistical-Computational Gaps