Proceedings of the Web Conference 2020 2020
DOI: 10.1145/3366423.3380150

Efficient Implicit Unsupervised Text Hashing using Adversarial Autoencoder

Cited by 8 publications (3 citation statements)
References 20 publications
“…Specifically, bit independence means each bit is independent of the others in the output distribution for all hash codes [14]. It can better preserve the original locality structure of the data [8] but may render it unsuitable to calculate the similarity as prior knowledge distillation methods directly. To address this property, we first explore the distribution of bits within one relevance set.…”
Section: Bit Mask Mechanism (mentioning)
confidence: 99%
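
The quoted passage describes "bit independence": across all generated hash codes, each bit position is distributed independently of the others. As a rough illustration only (not code from the cited papers), one way to check this property empirically is to measure pairwise correlation between bit positions of a batch of codes; the array `codes`, its size, and the random stand-in data below are all assumptions for the sketch.

```python
import numpy as np

# Minimal sketch, assuming `codes` is an (n_docs, n_bits) array of 0/1 hash
# codes (hypothetical here; in practice it would come from a hashing model).
rng = np.random.default_rng(0)
codes = (rng.random((1000, 16)) > 0.5).astype(float)  # stand-in data

# Pearson correlation between bit columns; rows/cols index bit positions.
corr = np.corrcoef(codes, rowvar=False)

# For approximately independent bits, off-diagonal correlations stay near 0.
off_diag = corr[~np.eye(corr.shape[0], dtype=bool)]
print("max |corr| between distinct bits:", np.abs(off_diag).max())
```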
“…Desirable hash codes have unique properties in distribution. For example, some semantic hashing models [8,23,37,65] tend to generate "bit independence" hash codes, resulting in each bit being independent of the others. This property can better preserve the original locality structure of the data but may render it unsuitable to calculate the distance between two hash codes as done in prior knowledge distillation methods [49,56,66], which are generally designed for real-value representations.…”
mentioning
confidence: 99%
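
This statement contrasts binary hash codes with the real-valued representations that prior knowledge distillation methods assume. As a hedged sketch (an assumption for illustration, not the cited papers' procedure), binary codes are typically compared with Hamming distance, whereas continuous embeddings are compared with cosine similarity; the example vectors below are hypothetical.

```python
import numpy as np

# Hypothetical 8-bit hash codes: compared by Hamming distance (bit disagreements).
a_bits = np.array([1, 0, 1, 1, 0, 0, 1, 0])
b_bits = np.array([1, 1, 1, 0, 0, 0, 1, 1])
hamming = int(np.sum(a_bits != b_bits))

# Hypothetical real-valued embeddings: compared by cosine similarity, as
# distillation methods designed for continuous representations would do.
a_vec = np.array([0.2, -1.3, 0.7, 0.1])
b_vec = np.array([0.3, -1.1, 0.5, 0.4])
cosine = float(a_vec @ b_vec / (np.linalg.norm(a_vec) * np.linalg.norm(b_vec)))

print("Hamming distance between codes:", hamming)
print("Cosine similarity between embeddings:", round(cosine, 3))
```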
“…Deep neural networks have recently made a number of breakthroughs and have had great success with wellknown applications and tasks ranging from computer vision (Krizhevsky, Sutskever, and Hinton 2012;He et al 2016) and natural language processing (Devlin et al 2018;Doan and Reddy 2020) to other popular areas such as games (Berner et al 2019;Silver et al 2017), computational advertising (Zhao et al 2020b;Xu et al 2021), and structural biology (AlQuraishi 2019; Doan et al 2021). However, the training of neural networks usually requires large datasets, and at the same time, the whole training process is labor-intensive and resource-intensive.…”
Section: Introduction (mentioning)
confidence: 99%