2020
DOI: 10.3390/app10248924
Using BiLSTM Networks for Context-Aware Deep Sensitivity Labelling on Conversational Data

Abstract: Information privacy is a critical design feature for any exchange system, with privacy-preserving applications requiring, most of the time, the identification and labelling of sensitive information. However, privacy and the concept of “sensitive information” are extremely elusive terms, as they are heavily dependent upon the context they are conveyed in. To accommodate such specificity, we first introduce a taxonomy of four context classes to categorise relationships of terms with their textual surroundings by…

Cited by 12 publications (6 citation statements)
References 43 publications (52 reference statements)
“…The features from the cosine similarity are passed into the BiLSTM model to classify the web pages. An RNN is a variant of the standard ANN that models sequential information through recurrent connections [24]. Essentially, it maintains a hidden state that acts as a “memory” of the preceding inputs.…”
Section: Design of BSO-BiLSTM Based Classification Model
confidence: 99%
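The snippet above describes cosine-similarity features being fed into a BiLSTM classifier. As a minimal, standard-library sketch of the feature step only (the vector values here are hypothetical, not from the cited work):

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity: dot(a, b) / (||a|| * ||b||).
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

print(cosine_similarity([1.0, 0.0], [1.0, 0.0]))  # identical direction -> 1.0
print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))  # orthogonal vectors -> 0.0
```

A sequence of such similarity scores (one per page element or term) would then form the input sequence passed to the BiLSTM.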
“…Figure 3 illustrates the process of the BiLSTM. This follows from the fact that every neuron computes a function of all prior inputs (Pogiatzis & Samakovitis, 2020). The input units …”
Section: BiLSTM-Based Feature Extraction
confidence: 99%
“…Figure 3 illustrates the process of the BiLSTM. This follows from the fact that every neuron computes a function of all prior inputs (Pogiatzis & Samakovitis, 2020). The input units {…, x_{t-1}, x_t, x_{t+1}, …}, where x = (x_1, x_2, x_3, …, x_N), are connected to the hidden units h_t = (h_1, h_2, …, h_M) in the hidden layer through links weighted by the matrix W_IH.…”
Section: The Proposed Model
confidence: 99%
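The input-to-hidden wiring described above can be sketched as a plain recurrent step, h_t = tanh(W_IH · x_t + W_HH · h_{t-1}). This is a simplified recurrence (no LSTM gates), with hypothetical dimensions N and M chosen for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 4, 3                           # input size, hidden size (illustrative)
W_IH = rng.standard_normal((M, N))    # input-to-hidden weight matrix
W_HH = rng.standard_normal((M, M))    # hidden-to-hidden weight matrix

def step(x_t, h_prev):
    # One recurrent step: the new hidden state depends on the current input
    # and, through h_prev, on every earlier input in the sequence.
    return np.tanh(W_IH @ x_t + W_HH @ h_prev)

h = np.zeros(M)
for x_t in rng.standard_normal((5, N)):  # a sequence of 5 input vectors
    h = step(x_t, h)
print(h.shape)  # (3,)
```

A full LSTM cell replaces the single tanh update with gated input, forget, and output paths, but the W_IH connectivity between input units and hidden units is the same.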
“…Bidirectional LSTMs extend traditional LSTMs [17] and can improve model performance on sequence classification problems. In situations where all time steps of the input sequence are available, a Bidirectional LSTM trains two LSTMs [18] rather than one: the first on the input sequence as-is, and the second on a reversed copy of the input sequence.…”
Section: Bidirectional LSTMs
confidence: 99%
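The two-direction scheme above can be sketched without any LSTM machinery at all: run one recurrent cell forward, run a second cell over a reversed copy, re-align the backward states, and concatenate per time step. The toy accumulator cell here is purely illustrative, standing in for real LSTM cells:

```python
def run_direction(seq, step, h0):
    # Unroll a recurrent cell over a sequence, collecting every hidden state.
    states, h = [], h0
    for x in seq:
        h = step(x, h)
        states.append(h)
    return states

def bidirectional(seq, step_f, step_b, h0):
    # Forward pass on the sequence as-is, backward pass on a reversed copy,
    # then concatenate the aligned hidden states at each time step.
    fwd = run_direction(seq, step_f, h0)
    bwd = run_direction(list(reversed(seq)), step_b, h0)
    bwd.reverse()  # re-align backward states with the original time order
    return [f + b for f, b in zip(fwd, bwd)]  # list concatenation

# Toy cell: the hidden state is a one-element list holding a running sum.
step = lambda x, h: [h[0] + x]
out = bidirectional([1, 2, 3], step, step, [0])
print(out)  # [[1, 6], [3, 5], [6, 3]]
```

At each position the output combines everything seen up to that step from the left with everything seen up to that step from the right, which is why the combined state at t = 0 already reflects the whole sequence from the backward direction.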