2018
DOI: 10.1155/2018/7104764

Shared Variable Extraction and Hardware Implementation for Nonlinear Boolean Functions Based on Swarm Intelligence

Abstract: To solve the problem of complex relationships among variables and the difficulty of extracting shared variables from nonlinear Boolean functions (NLBFs), an association logic model of the variables is established using the classical Apriori rule-mining algorithm, and association analysis is carried out during shared variable extraction (SVE). This work transforms the SVE problem into a traveling salesman problem (TSP) and proposes an SVE based on particle swarm optimization (SVE-PSO) method that combines the associat…
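As a rough illustration of the idea in the abstract, the sketch below encodes an ordering of product terms as a TSP-style tour and searches it with a swap-sequence particle swarm. The term list, the sharing score, and every name in it are illustrative assumptions, not the paper's actual SVE-PSO encoding.

```python
# Hypothetical sketch: casting shared-variable extraction as a TSP-like
# ordering problem and searching it with a swap-sequence PSO.
import random

# Each "city" is a product term of a Boolean function, given as a set of variables.
terms = [{"x1", "x2", "x3"}, {"x2", "x3"}, {"x3", "x4"}, {"x1", "x4", "x5"}]

def sharing_score(order):
    # Reward variable overlap between consecutive terms: more overlap means
    # more sub-expressions that could be shared in hardware (assumed cost model).
    return sum(len(terms[a] & terms[b]) for a, b in zip(order, order[1:]))

def swaps_toward(current, target):
    # Swap sequence that transforms `current` into `target` (the PSO "velocity").
    current, ops = list(current), []
    for i in range(len(target)):
        if current[i] != target[i]:
            j = current.index(target[i])
            current[i], current[j] = current[j], current[i]
            ops.append((i, j))
    return ops

def apply_swaps(order, ops, keep_prob):
    # Apply each swap with a given probability, moving the particle part-way.
    order = list(order)
    for i, j in ops:
        if random.random() < keep_prob:
            order[i], order[j] = order[j], order[i]
    return order

random.seed(0)
n, swarm_size, iters = len(terms), 20, 100
particles = [random.sample(range(n), n) for _ in range(swarm_size)]
pbest = list(particles)
gbest = max(pbest, key=sharing_score)

for _ in range(iters):
    for k, x in enumerate(particles):
        # Move each particle partly toward its personal best and the global best.
        x = apply_swaps(x, swaps_toward(x, pbest[k]), keep_prob=0.5)
        x = apply_swaps(x, swaps_toward(x, gbest), keep_prob=0.5)
        particles[k] = x
        if sharing_score(x) > sharing_score(pbest[k]):
            pbest[k] = x
    gbest = max(pbest, key=sharing_score)

print("best ordering:", gbest, "score:", sharing_score(gbest))
```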

Cited by 1 publication (2 citation statements)
References 9 publications
“…The original QPSO and HQPSO methods employ the same parametric setup, except for the difference in chemotactic step size and swimming length in the bacterial foraging mechanism. The chemotactic step size was kept at 0.1 in the classical BFO [29]. For the dynamic approximation control strategy, the chemotactic step size declines exponentially as the bacterial foraging iterative process advances.…”
Section: Results
confidence: 99%
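A minimal sketch of the step-size schedule this statement describes, assuming a simple exponential decay: the initial value 0.1 matches the classical BFO setting quoted above, while the decay rate and iteration counts are illustrative assumptions.

```python
# Exponentially declining chemotactic step size under a dynamic
# approximation control strategy (decay rate `lam` is assumed).
import math

C0, lam = 0.1, 0.05  # C0 = 0.1 as in the classical BFO setting cited above

def chemotactic_step(t):
    # Step size shrinks exponentially as the foraging process advances.
    return C0 * math.exp(-lam * t)

for t in (0, 25, 50, 100):
    print(f"iteration {t:3d}: step size = {chemotactic_step(t):.4f}")
```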
“…(7) with respect to the best output weights in the ELM model. To minimize the constrained optimization function and optimize the parameters of the ELM, conventional optimization methods such as Dynamic Programming (DP) and the Particle Swarm Optimization (PSO) algorithm often suffer from being trapped in local optima [26][27][28][29][30]. Inspired by quantum mechanics, a new version of PSO named Quantum-behaved Particle Swarm Optimization (QPSO) [31] was proposed for its guaranteed global convergence.…”
Section: An Efficient Hybrid Intelligent Optimization Method For…
confidence: 99%
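For orientation, below is a minimal sketch of the canonical QPSO update this statement refers to, applied to a stand-in sphere objective rather than the actual ELM output-weight problem; the dimension, bounds, and contraction-expansion schedule are illustrative assumptions.

```python
# Canonical QPSO on a stand-in objective (sphere function), not the ELM problem.
import random, math

dim, swarm_size, iters = 5, 20, 200
f = lambda x: sum(v * v for v in x)          # stand-in for the ELM objective

random.seed(1)
X = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(swarm_size)]
pbest = [list(x) for x in X]
gbest = min(pbest, key=f)

for t in range(iters):
    beta = 1.0 - 0.5 * t / iters             # contraction-expansion coefficient (assumed schedule)
    # mbest: mean of all personal-best positions (the "mean best" of QPSO).
    mbest = [sum(p[d] for p in pbest) / swarm_size for d in range(dim)]
    for i, x in enumerate(X):
        for d in range(dim):
            phi, u = random.random(), 1.0 - random.random()   # u in (0, 1]
            # Local attractor between the personal and global best.
            p = phi * pbest[i][d] + (1 - phi) * gbest[d]
            step = beta * abs(mbest[d] - x[d]) * math.log(1.0 / u)
            x[d] = p + step if random.random() < 0.5 else p - step
        if f(x) < f(pbest[i]):
            pbest[i] = list(x)
    gbest = min(pbest, key=f)

print("best value:", f(gbest))
```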