2020
DOI: 10.1016/j.compbiolchem.2019.107187

Metabolic networks classification and knowledge discovery by information granulation

Cited by 23 publications (25 citation statements) · References 66 publications
“…Considering the above issues, we use the [32] and … to guide individual updates. These two objective functions use weight aggregation [33, 34, 35] as the fitness function, as shown in (6), where the smaller the fitness value, the better the individual’s performance, and where … is a weight coefficient to combine the … and ….”
Section: Proposed Methods (mentioning; confidence: 99%)
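The weighted-aggregation scheme described in the excerpt above can be illustrated with a short sketch. Since the excerpt omits the original symbols, the two objective values f1 and f2 and the weight w below are placeholder names, not the notation of the cited work.

```python
# Minimal sketch of weighted aggregation of two objectives into a single
# fitness value (smaller is better). f1, f2 and w are placeholder names;
# the cited paper's own symbols are not given in the excerpt above.

def aggregated_fitness(f1: float, f2: float, w: float = 0.5) -> float:
    """Combine two minimized objectives with a weight coefficient w in [0, 1]."""
    return w * f1 + (1.0 - w) * f2

# Example: select the better of two candidate individuals.
candidates = {"a": (0.30, 0.90), "b": (0.55, 0.40)}
best = min(candidates, key=lambda k: aggregated_fitness(*candidates[k], w=0.6))
print(best)  # "b": 0.6*0.55 + 0.4*0.40 = 0.49 < 0.6*0.30 + 0.4*0.90 = 0.54
```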
“…Embedding-based representation methods are also used for textual data; several effective embedding methods exist, such as Skip-gram [14], latent semantic indexing (LSI) [15], and latent Dirichlet allocation (LDA) [16], as well as some variants of them in [17, 18, 19]. The Granular Computing paradigm [20, 21, 22] is an embedding method that is especially powerful when dealing with non-conventional data such as graphs, sequences, and text documents. However, the embedding representation for textual data differs significantly from that for categorical data, since categorical data is structured whereas textual data is unstructured.…”
Section: Related Work (mentioning; confidence: 99%)
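As a concrete illustration of one of the embedding families named in this excerpt, the sketch below builds LSI document vectors with scikit-learn (TF-IDF followed by truncated SVD). The toy corpus and the number of components are illustrative assumptions, not drawn from the cited works.

```python
# Hedged sketch: latent semantic indexing (LSI) as a document-embedding
# method, built from TF-IDF followed by truncated SVD with scikit-learn.
# The corpus and n_components are toy choices for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.pipeline import make_pipeline

corpus = [
    "metabolic network classification",
    "graph embedding by information granulation",
    "document embedding with latent semantic indexing",
    "granular computing for structured data",
]

lsi = make_pipeline(TfidfVectorizer(), TruncatedSVD(n_components=2, random_state=0))
embeddings = lsi.fit_transform(corpus)   # each document becomes a dense 2-D vector
print(embeddings.shape)                  # (4, 2)
```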
“…In order to deal with such domains, five mainstream approaches can be pursued [10]:
- Feature generation and/or feature engineering, where numerical features are extracted ad hoc from structured patterns (e.g., using their properties or via measurements) and can be further merged according to different strategies (e.g., in a multi-modal way [11]);
- Ad-hoc dissimilarities in the input space, where custom dissimilarity measures are designed to process structured patterns directly in the input domain without moving towards Euclidean (or metric) spaces. Common, possibly parametric, edit distances include the Levenshtein distance [12] for sequence domains and graph edit distances [13] for graph domains;
- Embedding via information granulation and granular computing [3, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25];
- Dissimilarity representations [26, 27, 28], where structured patterns are embedded in the Euclidean space according to their pairwise dissimilarities;
- Kernel methods, where the mapping between the original input space and the Euclidean space exploits positive-definite kernel functions [29, 30, 31, 32, 33].…”
Section: Introduction (mentioning; confidence: 99%)
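To make the dissimilarity-representation idea in the excerpt above concrete, the sketch below embeds toy sequences in a Euclidean space through their Levenshtein distances to a small prototype set. The prototypes and sequences are assumptions chosen for illustration; they are not taken from the cited references.

```python
# Hedged sketch of a dissimilarity representation for sequence data: each
# sequence becomes the vector of its Levenshtein distances to a prototype
# set. Prototypes and sequences are toy examples chosen for illustration.

def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

prototypes = ["ACGT", "TTGA"]          # representative (prototype) sequences
data = ["ACGA", "TTGG", "CCCC"]        # patterns to embed

# One row per pattern: its distances to the prototypes form a Euclidean vector.
embedded = [[levenshtein(s, p) for p in prototypes] for s in data]
print(embedded)  # [[1, 2], [3, 1], [3, 4]]
```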