1992
DOI: 10.1016/0743-7315(92)90072-u
A neural network model for finding a near-maximum clique

Cited by 32 publications (15 citation statements). References 11 publications.
“…Their metaheuristic is capable of finding the best known solution in almost all the DIMACS graphs in reasonable running time. Other recent (meta)heuristics include GRASP [1,12], scatter search [8], ant colony optimization [11], incomplete dynamic backtracking approach [29], annealed replication heuristic [5], continuous optimization heuristics [7], and neural network [13]. These have been proposed for the MCP or strongly related problems, i.e., the maximum independent (stable) set problem and the minimum vertex cover problem.…”
Section: Maximum Clique Problem
confidence: 99%
“…We run EH, enumerating 150 cliques for each graph. The results are compared with the following heuristics based on dynamical systems for maximum clique: Jagota's Continuous Hopfield Dynamics (CHD) and Mean Field Annealing (MFA) [26,27], the Saturated Linear Dynamical Network (SLDN) by Pekergin et al. [38], an approximation approach introduced by Funabiki et al. (FTL) [19], the Iterative Hopfield Nets (IHN) algorithm by Bertoni et al. [6], and the Hopfield Network Learning (HNL) of Wang et al. [52]. Moreover, we also compare with other Motzkin–Straus-based heuristics, namely the Replicator Dynamics (RD) [39] and the Annealed Imitation Heuristic (AIH) [42].…”
Section: Experiments On Random Graphs
confidence: 99%
“…The dynamical equation for generating a clique of a graph is given by

$\dfrac{dU_i}{dt} = -A_i \sum_{j=1,\, j \ne i}^{n} (1 - d_{ij})\, V_j + B_i\, (1 - V_i)\, f\!\left( \sum_{j=1,\, j \ne i}^{n} (1 - d_{ij})\, V_j \right)$ (9)

$V_i = 1$ if $U_i > 0$, and $V_i = 0$ otherwise (10)

where $d_{ij}$ (the compatibility between two subcommands) is one if neuron $i$ is connected to neuron $j$ (i.e., the two subcommands $i$ and $j$ are compatible), and zero otherwise (in graph theory, $d_{ij}$ represents an element of the adjacency matrix: when $d_{ij} = 1$, vertex $i$ is adjacent to vertex $j$, and when $d_{ij} = 0$, vertex $i$ is not adjacent to vertex $j$); $n$ is the number of neurons; and

$f(x) = 1$ if $x = 0$, and $f(x) = 0$ otherwise. (11)

The first term in (9) discourages neuron $i$ (subcommand $i$) from having a nonzero output when some neuron $j$ with nonzero output is not adjacent to it. The second term in (9) encourages neuron $i$ to have a nonzero output when neuron $i$ is adjacent to all the other neurons with nonzero output and the output of neuron $i$ is zero. Because a neuron with many adjacent neurons is more likely to belong to a clique, the coefficients $A_i$ and $B_i$ are set according to the number of neurons adjacent to neuron $i$ in our algorithm (see [26] for a detailed description), as given in (12), where the number of adjacent neurons is defined by (5). The algorithm is described below.…”
Section: A NNMC With Binary Neuron Function
confidence: 99%
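As a concrete illustration of the dynamics in (9)–(11), the following is a minimal Python sketch. It is our own reading of the excerpt, not the authors' implementation: all names are invented, the synchronous update schedule is an assumption, and the degree-based scaling of the coefficients only stands in for the unrecovered form of (12).

```python
import numpy as np

def nnmc_near_max_clique(adj, steps=200, seed=0):
    """Iterate the binary-neuron dynamics of (9)-(11) on a graph.

    adj : (n, n) symmetric 0/1 adjacency (compatibility) matrix with a
          zero diagonal; adj[i, j] == 1 means vertices i and j are adjacent.
    Returns the 0/1 output vector V after a fixed number of update steps.
    """
    rng = np.random.default_rng(seed)
    n = adj.shape[0]
    U = rng.uniform(-1.0, 1.0, size=n)        # internal neuron states
    V = (U > 0).astype(int)                   # binary outputs, eq. (10)
    deg = adj.sum(axis=1)
    # Stand-in for eq. (12): neurons with more neighbours get larger
    # coefficients, since they are more likely to belong to a large clique.
    A = 1.0 + deg / max(1, n)
    B = 1.0 + deg / max(1, n)
    for _ in range(steps):
        # conflicts[i] = sum over j != i of (1 - d_ij) * V_j, i.e. the number
        # of active neurons NOT adjacent to i (the f-argument in eq. (9)).
        conflicts = (1 - adj) @ V - V         # subtract V[i] for the j = i term
        # Eq. (9): penalise conflicts; reward switching on a silent neuron
        # that is compatible with every currently active neuron.
        dU = -A * conflicts + B * (conflicts == 0) * (1 - V)
        U = U + dU
        V = (U > 0).astype(int)               # eq. (10)
    return V
```

At a fixed point of these updates every active neuron is compatible with every other active neuron, so the support of V is a clique; with fully synchronous updates the network can also cycle, which is why the sketch simply runs a fixed number of steps rather than testing for convergence.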
“…Specifically, we propose two submethods for this problem: the neural-network maximum clique (NNMC) [26] and the neural-network cost optimizer (NNCO). The NNMC is a scheme whose computation time grows only insignificantly with problem size, and it generated optimal solutions for small problem sizes (in terms of the number of microoperations); however, the NNMC has no explicit cost function to minimize, and hence it failed to give the lowest known solutions for larger problem sizes (having a larger number of microoperations). Since the NNMC partitions the data into compatibility classes with an insignificant increase in computation time as the problem size grows, we used this conditioned data as the input to the NNCO method to obtain better solutions for the DEC and IBM microinstruction sets.…”
confidence: 99%
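Read literally, the pipeline above first uses the clique model to partition the microoperations into compatibility classes and then hands that partition to a cost-optimizing stage. Below is a hedged sketch of the partitioning step under that reading; the function names are ours, and the NNCO stage itself is not specified in the excerpt, so it is left abstract.

```python
import numpy as np

def compatibility_classes(adj, clique_finder):
    """Partition vertices (microoperations) into compatibility classes by
    repeatedly extracting a clique of the compatibility graph and removing
    it, until every vertex is assigned to a class.

    adj           : symmetric 0/1 compatibility matrix with zero diagonal.
    clique_finder : maps an adjacency matrix to a 0/1 clique-membership
                    vector (e.g. nnmc_near_max_clique from the sketch above).
    """
    remaining = list(range(adj.shape[0]))
    classes = []
    while remaining:
        sub = adj[np.ix_(remaining, remaining)]
        mask = np.asarray(clique_finder(sub))
        if mask.sum() == 0:          # safety net: place one vertex alone
            mask = np.zeros(len(remaining), dtype=int)
            mask[0] = 1
        cls = [remaining[k] for k in np.flatnonzero(mask)]
        classes.append(cls)
        remaining = [v for v in remaining if v not in cls]
    return classes   # the "conditioned data" a cost-optimising stage consumes
```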