2022
DOI: 10.1007/s00521-022-07339-6

Semantic enhanced Top-k similarity search on weighted HIN

Abstract: Similarity searches on heterogeneous information networks (HINs) have attracted wide attention from both industry and academia in recent years; for example, they have been used for friend detection in social networks and collaborator recommendation in coauthor networks. The structural information of an HIN can be captured by multiple metapaths, and people usually utilize metapaths to design methods for similarity search. The rich semantics in HINs are not only structural information but also content s…
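The metapath-based similarity search the abstract refers to is typified by PathSim (Sun et al., 2011), which scores two objects by counting instances of a symmetric metapath between them. Below is a minimal sketch in Python, assuming a toy author–paper network and the A-P-A metapath; the paper's own weighted, semantics-enhanced measure is not reproduced here.

```python
import numpy as np

# Toy author-paper bipartite adjacency (assumption for illustration):
# A_ap[i, j] = 1 if author i wrote paper j.
A_ap = np.array([
    [1, 1, 0],
    [1, 0, 1],
    [0, 1, 1],
])

# Commuting matrix for the symmetric metapath A-P-A:
# M[i, j] counts metapath instances between authors i and j.
M = A_ap @ A_ap.T

def pathsim(M, i, j):
    """PathSim(i, j) = 2 * M[i, j] / (M[i, i] + M[j, j])."""
    return 2.0 * M[i, j] / (M[i, i] + M[j, j])

def topk(M, query, k=2):
    """Top-k most similar objects to `query` under this metapath."""
    scores = [(j, pathsim(M, query, j)) for j in range(M.shape[0]) if j != query]
    return sorted(scores, key=lambda s: -s[1])[:k]

print(topk(M, query=0))
```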






Cited by 6 publications (8 citation statements)
References 51 publications
“…Concretely, these PETL methods insert lightweight adaptation modules into the pretrained models, freeze the pretrained weights, and fine-tune these modules end-to-end to adapt to downstream tasks. Recent work has verified the effectiveness of these PETL methods on ViT (Jia et al. 2022; Zhang, Zhou, and Liu 2022), but we raise the question: are these modules designed for language models optimal for vision models as well?…”
Section: Introduction
Citation type: mentioning (confidence: 88%)
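The generic PETL recipe this statement describes can be made concrete with a short sketch: a bottleneck adapter is inserted around a pretrained block, the backbone is frozen, and only the adapter is trained. Module names and sizes below are illustrative assumptions, not taken from the cited papers.

```python
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Lightweight bottleneck adaptation module (illustrative sizes)."""
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.down = nn.Linear(dim, hidden)  # down-projection
        self.up = nn.Linear(hidden, dim)    # up-projection

    def forward(self, x):
        # Residual bottleneck: x + up(relu(down(x)))
        return x + self.up(torch.relu(self.down(x)))

class AdaptedBlock(nn.Module):
    """Wraps one pretrained block and applies an adapter after it."""
    def __init__(self, pretrained_block, dim):
        super().__init__()
        self.block = pretrained_block
        self.adapter = Adapter(dim)

    def forward(self, x):
        return self.adapter(self.block(x))

def mark_trainable(model):
    """Freeze everything, then unfreeze only the adapter parameters."""
    for p in model.parameters():
        p.requires_grad = False
    for m in model.modules():
        if isinstance(m, Adapter):
            for p in m.parameters():
                p.requires_grad = True
```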
“…NOAH (Zhang, Zhou, and Liu 2022) is a newly proposed PETL method for ViT, which combines the above three modules and performs neural architecture search on the hidden dimension h of Adapter, the rank r of LoRA, and the prompt length l of VPT.…”
Section: Parameter-Efficient Transfer Learning
Citation type: mentioning (confidence: 99%)
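A rough sketch of the joint search space this statement attributes to NOAH, over the Adapter hidden dimension h, LoRA rank r, and VPT prompt length l. The candidate values and the simple random search below are stand-in assumptions; NOAH itself trains a supernet and searches it evolutionarily.

```python
import random

# Joint PETL search space (candidate values are illustrative assumptions).
search_space = {
    "adapter_hidden_h": [0, 8, 16, 32],   # 0 = Adapter disabled
    "lora_rank_r": [0, 4, 8],             # 0 = LoRA disabled
    "prompt_length_l": [0, 10, 50, 100],  # 0 = VPT disabled
}

def sample_config(space):
    """Draw one (h, r, l) configuration from the joint space."""
    return {k: random.choice(v) for k, v in space.items()}

def random_search(evaluate, space, trials=20):
    """evaluate(config) -> validation score; returns the best config found."""
    best_cfg, best_score = None, float("-inf")
    for _ in range(trials):
        cfg = sample_config(space)
        score = evaluate(cfg)
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score
```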
“…For all experiments with SAFT, we use ViT-B [12], a 12-block Vision Transformer pretrained on the ImageNet-21K dataset, as the backbone model. To train our models, we follow previous works [20], [11], using the AdamW optimizer with a learning rate of 1e-3 and a cosine learning rate scheduler with a decay cycle of 0.9 and a minimum learning rate of 1e-5. For the hyper-parameters, we fix the rank at 24, with a scale of 1 for Oxford Flowers and 10 for Caltech101.…”
Section: Implementation Details
Citation type: mentioning (confidence: 99%)
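The quoted training setup maps onto a standard PyTorch configuration. A sketch, with `model` and `num_steps` as placeholders; note that the cited "decay cycle of 0.9" is specific to that paper's scheduler and is only approximated here, as an assumption, by annealing over 90% of the total steps.

```python
import torch

def make_optimizer(model, num_steps):
    # AdamW at lr 1e-3, as quoted above.
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
    # Cosine schedule decaying toward the quoted floor of 1e-5.
    scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(
        optimizer,
        T_max=max(1, int(0.9 * num_steps)),  # "decay cycle of 0.9" (assumption)
        eta_min=1e-5,                        # minimum learning rate
    )
    return optimizer, scheduler
```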
“…In particular, CLIP [28], a recently developed vision-language model, demonstrates its generalization capability through remarkable zero-shot performance in decoding high-level semantics from textual and visual data. In addition, when it comes to transferring knowledge of a pre-trained foundation model to downstream tasks, prompt engineering [29][30][31][32][33] has shown efficacy. The underlying idea is to learn a suitable textual context as a prompt built around the main text for querying the model's text encoder.…”
Section: Introduction
Citation type: mentioning (confidence: 99%)
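The prompt-learning idea described here can be sketched in the style of CoOp-like methods: a learnable continuous context is prepended to each class-name embedding before querying a frozen text encoder. The dimensions and the frozen-encoder call below are illustrative assumptions, not CLIP's actual API.

```python
import torch
import torch.nn as nn

class LearnablePrompt(nn.Module):
    """Shared learnable context prepended to class-name embeddings."""
    def __init__(self, n_ctx=16, dim=512):
        super().__init__()
        # Continuous context, optimized on the downstream task while the
        # encoders stay frozen.
        self.ctx = nn.Parameter(torch.randn(n_ctx, dim) * 0.02)

    def forward(self, class_token_embs):
        # class_token_embs: [n_classes, n_tokens, dim] class-name embeddings.
        n_classes = class_token_embs.size(0)
        ctx = self.ctx.unsqueeze(0).expand(n_classes, -1, -1)
        return torch.cat([ctx, class_token_embs], dim=1)  # prompt + class name

# Usage (frozen_text_encoder is a hypothetical stand-in for CLIP's encoder):
#   text_feats = frozen_text_encoder(prompt(class_token_embs))
#   logits = image_feats @ text_feats.t()
```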