2018 International Conference on Intelligent Informatics and Biomedical Sciences (ICIIBMS)
DOI: 10.1109/iciibms.2018.8549962
Collaborative Filtering Algorithm Based on Trust and Information Entropy

Cited by 2 publications (3 citation statements) · References 11 publications
“…The purpose of this is to prevent falsely high information quality in this modal information. A large number of studies have found that the divergence of information plays a vital role in the accuracy of information [34]. Next, the recognition rate of single-modal information needs to be obtained, such as the successful recognition rate α₁ of speech information.…”
Section: Trust Degree Evaluation and Reverse Active Interaction (mentioning)
confidence: 99%
“…Zhang and Zhong [18] have proposed a method to design a trust model based on the trust transitivity feature. In [19], the authors combine users' direct trust and indirect trust, and obtain the trust similarity by the Pearson similarity formula.…”
Section: Related Work (mentioning)
confidence: 99%
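The trust-similarity combination attributed to [19] can be illustrated with a minimal sketch: Pearson similarity over co-rated items, blended with a trust value that falls back from direct trust to propagated (indirect) trust. The weighting parameter alpha, the data layout, and all helper names below are assumptions for illustration, not the cited authors' code.

```python
# Hypothetical sketch of the trust-aware similarity described in [19].
# Ratings are dicts {item_id: rating}; direct/indirect trust are dicts
# keyed by user pairs. The blend weight alpha is an assumed parameter.
import math

def pearson_similarity(ratings_u, ratings_v):
    """Pearson correlation computed over items rated by both users."""
    common = set(ratings_u) & set(ratings_v)
    if len(common) < 2:
        return 0.0
    mu_u = sum(ratings_u[i] for i in common) / len(common)
    mu_v = sum(ratings_v[i] for i in common) / len(common)
    num = sum((ratings_u[i] - mu_u) * (ratings_v[i] - mu_v) for i in common)
    den_u = math.sqrt(sum((ratings_u[i] - mu_u) ** 2 for i in common))
    den_v = math.sqrt(sum((ratings_v[i] - mu_v) ** 2 for i in common))
    return num / (den_u * den_v) if den_u and den_v else 0.0

def combined_trust(direct, indirect, u, v):
    """Use direct trust when available, otherwise the propagated (indirect) trust."""
    return direct.get((u, v), indirect.get((u, v), 0.0))

def trust_similarity(u, v, ratings, direct, indirect, alpha=0.5):
    """Blend trust and Pearson rating similarity (alpha is an assumed weight)."""
    sim = pearson_similarity(ratings[u], ratings[v])
    trust = combined_trust(direct, indirect, u, v)
    return alpha * trust + (1 - alpha) * sim
```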
“…In detail, the agent observes the environment state s_t and takes a joint action (x_t, y_t), which yields a reward and affects the state at the next moment, s_{t+1}. The agent then stores the transition tuple (s_t, (x_t, y_t), r_t, s_{t+1}) in the replay memory D. A transition in D consists of the current state s_t, the current action a_t, the reward r_t calculated by formula (19), and the state s_{t+1} of the next decision epoch after the environment receives the action. Furthermore, at each episode, minibatches are taken from D. The network updates the parameters θ to minimize the mean square error as follows:…”
Section: Reward Design (mentioning)
confidence: 99%
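The procedure quoted above is the standard experience-replay loop of deep Q-learning: store transitions (s_t, (x_t, y_t), r_t, s_{t+1}), sample minibatches from D, and minimize the mean square error between predicted and bootstrapped action values. The sketch below assumes the reward of formula (19) is computed elsewhere; the buffer capacity, batch size, discount factor gamma, and the callables q_values / q_target are illustrative assumptions, not the citing paper's code.

```python
# Minimal experience-replay sketch matching the quoted description.
# q_values(state) and q_target(state) are assumed callables returning a
# 2-D array of action values indexed by the joint action (x, y).
import random
from collections import deque

import numpy as np

class ReplayMemory:
    """Stores transition tuples (s_t, (x_t, y_t), r_t, s_{t+1})."""
    def __init__(self, capacity=10000):
        self.buffer = deque(maxlen=capacity)

    def push(self, state, action, reward, next_state):
        self.buffer.append((state, action, reward, next_state))

    def sample(self, batch_size):
        """Draw a random minibatch of stored transitions."""
        return random.sample(self.buffer, batch_size)

def mse_loss(batch, q_values, q_target, gamma=0.99):
    """Mean square error between Q(s_t, (x_t, y_t)) and the bootstrapped
    target r_t + gamma * max Q_target(s_{t+1}, ·); this is the quantity
    the network minimizes when updating its parameters θ."""
    errors = []
    for state, action, reward, next_state in batch:
        target = reward + gamma * np.max(q_target(next_state))
        errors.append((q_values(state)[action] - target) ** 2)
    return float(np.mean(errors))
```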