2008
DOI: 10.1103/physreve.77.016104

Identifying network communities with a high resolution

Abstract: Community structure is an important property of complex networks. The automatic discovery of such structure is a fundamental task in many disciplines, including sociology, biology, engineering, and computer science. Recently, several community discovery algorithms have been proposed based on the optimization of a modularity function (Q). However, the problem of modularity optimization is NP-hard, and the existing approaches often suffer from a prohibitively long running time or poor quality. Furthermore, it ha…
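For context, the modularity function Q that these algorithms optimize can be written per community as Q = Σ_c (e_c/m − (d_c/2m)²), where e_c is the number of edges inside community c, d_c the total degree of its nodes, and m the total edge count. A minimal sketch of that computation (Newman–Girvan definition); the graph and partition in the usage example are illustrative, not taken from the paper:

```python
def modularity(edges, communities):
    """Q = sum over communities c of (e_c / m - (d_c / 2m)^2), where
    e_c = edges with both endpoints in c, d_c = total degree of c,
    m = total number of undirected edges."""
    m = len(edges)
    degree = {}
    for u, v in edges:
        degree[u] = degree.get(u, 0) + 1
        degree[v] = degree.get(v, 0) + 1
    comm = {n: i for i, grp in enumerate(communities) for n in grp}
    inside = [0] * len(communities)          # e_c for each community
    for u, v in edges:
        if comm[u] == comm[v]:
            inside[comm[u]] += 1
    q = 0.0
    for i, grp in enumerate(communities):
        d_c = sum(degree[n] for n in grp)    # total degree of community i
        q += inside[i] / m - (d_c / (2 * m)) ** 2
    return q

# Illustrative graph: two triangles joined by one bridge edge.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
q = modularity(edges, [{0, 1, 2}, {3, 4, 5}])  # ~ 0.357 (= 6/7 - 1/2)
```

Splitting the two triangles apart yields Q = 6/7 − 1/2 ≈ 0.357; merging everything into one community gives Q = 0, which is why the partition above is preferred by modularity maximization.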


Cited by 172 publications (174 citation statements)
References 31 publications (42 reference statements)
“…We reproduced these results with our version of the Grötschel and Wakabayashi algorithm. The optimal partition for modularity maximization contains the following 5 communities: C_1^m = {1, 2, 3, 5, 6, 7, 8, 19, 29, 30} with 6 n and 4 c, C_2^m = {4, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 20, 21, 22, 23, 24, 25, 26, 27, 28, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 54, 55, 56 and 3 c. These 5 communities consist of two large ones with no (for c) or very few (for l) misclassifications, two small communities with both n and c books and one community with all three categories. We count misclassifications as follows: any l in a community with a majority of c's or n's or conversely counts for 1;…”
Section: Krebs' Political Books (mentioning)
confidence: 99%
“…A large number of heuristics were proposed to maximize modularity. They rely on simulated annealing [16], extremal optimization [17], mean field annealing [18], genetic search [19], dynamical clustering [20], multilevel partitioning [21], contraction-dilation [22], multistep greedy [23], quantum mechanics [24] and a variety of other approaches [25,26,27,28,29,30]. These heuristics provide, usually in moderate time, near optimal partitions for the modularity criterion or, possibly, optimal partitions but without the proof of their optimality.…”
(mentioning)
confidence: 99%
“…Many heuristics have been proposed, while exact algorithms for modularity maximization are rare. Heuristics are based on agglomerative hierarchical clustering [2][3][4][5][6], simulated annealing [7][8][9], mean field annealing [10], genetic search [11], extremal optimization [12], spectral clustering [13], linear programming followed by randomized rounding [14], dynamical clustering [15], multilevel partitioning [16], contraction-dilation [17], multistep greedy search [18], quantum mechanics [19] and many more [6,[20][21][22][23][24]. These heuristics are able to solve large instances with up to hundreds or thousands of entities and therefore are often preferred to exact algorithms, even though they do not have a guarantee of optimality.…”
Section: Introduction (mentioning)
confidence: 99%
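The agglomerative hierarchical heuristics mentioned in the excerpts above can be sketched in a few lines: start from singleton communities and repeatedly apply the merge that most increases Q, stopping when no merge helps. This is an illustrative, brute-force toy in the spirit of greedy agglomeration (roughly the Clauset–Newman–Moore idea without its data-structure optimizations), not any cited implementation:

```python
from itertools import combinations

def modularity(edges, communities):
    """Q = sum over communities c of (e_c / m - (d_c / 2m)^2)."""
    m = len(edges)
    degree = {}
    for u, v in edges:
        degree[u] = degree.get(u, 0) + 1
        degree[v] = degree.get(v, 0) + 1
    comm = {n: i for i, grp in enumerate(communities) for n in grp}
    inside = [0] * len(communities)
    for u, v in edges:
        if comm[u] == comm[v]:
            inside[comm[u]] += 1
    return sum(inside[i] / m - (sum(degree[n] for n in grp) / (2 * m)) ** 2
               for i, grp in enumerate(communities))

def greedy_merge(edges):
    """Greedy agglomeration: merge the community pair that most
    increases Q; stop when no merge improves it."""
    nodes = sorted({n for e in edges for n in e})
    comms = [{n} for n in nodes]              # start with singletons
    q = modularity(edges, comms)
    while True:
        best = None
        for i, j in combinations(range(len(comms)), 2):
            trial = [c for k, c in enumerate(comms) if k not in (i, j)]
            trial.append(comms[i] | comms[j])
            tq = modularity(edges, trial)
            if best is None or tq > best[0]:
                best = (tq, trial)
        if best is None or best[0] <= q:      # no improving merge left
            return comms, q
        q, comms = best
```

On the two-triangles-plus-bridge graph, this sketch recovers the two triangles as communities. Each pass recomputes Q from scratch for every candidate merge, so it is cubic-and-worse in the node count; the optimized heuristics cited above maintain incremental ΔQ values instead.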
“…The partitioning and hybrid heuristics rely upon simulated annealing [24,34,35], mean field annealing [32], genetic search [51], extremal optimization [16], linear programming followed by randomized rounding [1], dynamical clustering [6], multilevel partitioning [15], contraction-dilation [36], multistep greedy search [49], quantum mechanics [44] and many more sources of inspiration [10,50,48,17,31].…”
Section: Introduction (mentioning)
confidence: 99%