2022
DOI: 10.1609/aaai.v36i4.20323

Naming the Most Anomalous Cluster in Hilbert Space for Structures with Attribute Information

Abstract: We consider datasets consisting of arbitrarily structured entities (e.g., molecules, sequences, graphs, etc.) whose similarity can be assessed with a reproducing kernel (or a family thereof). These entities are assumed to additionally have a set of named attributes (e.g., number_of_atoms, stock_price, etc.). These attributes can be used to classify the structured entities into discrete sets (e.g., ‘number_of_atoms < 3’, ‘stock_price ≤ 100’, etc.) and can effectively serve as Boolean predicates. Our goal …
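To make this setting concrete, here is a minimal sketch (all names and values are hypothetical, not taken from the paper) of entities that pair a kernel-comparable representation with named attributes acting as Boolean predicates:

```python
import numpy as np

# Hypothetical toy entities: a feature vector (a stand-in for an arbitrary
# structure compared via a reproducing kernel) plus named attributes.
entities = [
    {"features": np.array([0.1, 2.0]), "attrs": {"number_of_atoms": 2, "stock_price": 80.0}},
    {"features": np.array([1.5, 0.3]), "attrs": {"number_of_atoms": 5, "stock_price": 120.0}},
]

def rbf_kernel(x, y, gamma=1.0):
    """A reproducing kernel (Gaussian RBF) assessing entity similarity."""
    return float(np.exp(-gamma * np.sum((x - y) ** 2)))

# Named attributes double as Boolean predicates that carve the dataset
# into discrete subsets, as in the abstract's examples.
predicates = {
    "number_of_atoms < 3": lambda e: e["attrs"]["number_of_atoms"] < 3,
    "stock_price <= 100": lambda e: e["attrs"]["stock_price"] <= 100.0,
}

subset = [e for e in entities if predicates["number_of_atoms < 3"](e)]
```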

Cited by 3 publications (9 citation statements) · References 11 publications
“…A major paradigm in graph structure learning exploits this notion of smoothness to infer a sparse graph from a given set of observations [11], [21]. Specifically, a graph combinatorial Laplacian matrix can be inferred via the following optimization [21]:

min_L trace(FᵀLF) + α‖L‖²_F   s.t.   trace(L) = N,   L_{i,j} = L_{j,i} ≤ 0 (i ≠ j),   L·1 = 0,

where F is an N × M matrix of M graph signals, α is a regularization parameter, ‖·‖_F denotes the Frobenius norm, and 1 = [1, …, 1]ᵀ.…”
Section: Methods
Confidence: 99%
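As a hedged illustration (a sketch only, with synthetic F and an arbitrary α; not code from [21] or the citing works), this Laplacian-learning problem can be posed directly as a convex program, e.g. in CVXPY:

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
N, M = 10, 50
F = rng.standard_normal((N, M))   # M synthetic graph signals on N nodes
alpha = 0.5                       # regularization weight (arbitrary choice)

L = cp.Variable((N, N), symmetric=True)
objective = cp.Minimize(cp.trace(F.T @ L @ F) + alpha * cp.sum_squares(L))
constraints = [cp.trace(L) == N,        # fixes the scale of the solution
               L @ np.ones(N) == 0]     # L·1 = 0 (rows sum to zero)
# Off-diagonal entries of a combinatorial Laplacian are nonpositive.
constraints += [L[i, j] <= 0 for i in range(N) for j in range(N) if i != j]

problem = cp.Problem(objective, constraints)
problem.solve()
```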
“…Replacing the ℓ₂-norm with a logarithmic barrier, the optimization in (2) can be solved more efficiently via a more general-purpose formulation with respect to the graph’s adjacency matrix A [11]:

min_A ‖A ◦ Z‖₁ − α·1ᵀ log(A·1) + β‖A‖²_F,   (3)

where Z is an N × N matrix with elements Z_{i,j} = ‖F_{i,:} − F_{j,:}‖₂, i.e., the Euclidean distance between signal values on electrodes i and j. The first term in (3) enforces the smoothness constraint in a similar way as the first term in (2), based on the equivalence trace(FᵀLF) = 0.5‖A ◦ Z‖₁, where ◦ is the Hadamard product [19].…”
Section: Methods
Confidence: 99%
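A minimal sketch of this adjacency-based, log-barrier formulation (assuming the standard form from the graph-learning literature; F, α, β are synthetic placeholders, not values from the citing works):

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
N, M = 8, 40
F = rng.standard_normal((N, M))
# Z[i, j] = Euclidean distance between signal values on nodes i and j
Z = np.linalg.norm(F[:, None, :] - F[None, :, :], axis=-1)

alpha, beta = 1.0, 0.5                        # regularization (arbitrary)
A = cp.Variable((N, N), nonneg=True)
smoothness = cp.sum(cp.multiply(A, Z))        # equals ||A ∘ Z||_1 for A, Z >= 0
log_barrier = cp.sum(cp.log(A @ np.ones(N)))  # keeps every node degree positive
objective = cp.Minimize(smoothness - alpha * log_barrier
                        + beta * cp.sum_squares(A))
constraints = [A == A.T, cp.diag(A) == 0]     # valid adjacency matrix
cp.Problem(objective, constraints).solve(solver=cp.SCS)  # log needs an exp-cone solver
```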
“…Let F denote an N × M matrix where each column is an observation (a graph signal), and Z an N × N matrix with elements Z_{i,j} = ‖F_{i,:} − F_{j,:}‖₂, i.e., the Euclidean distance between signal values on vertices i and j. A graph structure can be inferred from F via the optimization [4]:

min_A ‖A ◦ Z‖₁ − α·1ᵀ log(A·1) + β‖A‖²_F,   (2)

where ◦ is the Hadamard product, α and β are regularization parameters, ‖·‖_F the Frobenius norm, and 1 = [1, …, 1]ᵀ. The first term in (2) enforces smoothness by invoking the equivalence trace(FᵀLF) = 0.5‖A ◦ Z‖₁ [4]: smooth graph signals reside on strongly connected vertices, and thus those vertices are expected to have smaller distances. The second term enforces degrees to be positive and improves overall connectivity.…”
Section: Methods
Confidence: 99%
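One caveat worth noting: the invoked identity trace(FᵀLF) = 0.5‖A ◦ Z‖₁ holds when Z contains squared Euclidean distances. A quick numerical sanity check on toy data (synthetic, not from the citing works):

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 6, 4
W = rng.random((N, N))
A = np.triu(W, 1); A = A + A.T            # symmetric adjacency, zero diagonal
L = np.diag(A.sum(axis=1)) - A            # combinatorial Laplacian L = D - A
F = rng.standard_normal((N, M))           # M graph signals on N vertices
Z = ((F[:, None, :] - F[None, :, :]) ** 2).sum(axis=-1)  # squared distances
lhs = np.trace(F.T @ L @ F)
rhs = 0.5 * (A * Z).sum()                 # 0.5 * ||A ∘ Z||_1 since A, Z >= 0
assert np.isclose(lhs, rhs)
```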