2023
DOI: 10.1016/j.ipm.2022.103220
Predicting information usefulness in health information identification from modal behaviors

Cited by 3 publications (4 citation statements)
References 23 publications
“…The F1 scores of these models were all higher than 77%, and they may be used in a variety of contexts, including the identification of health records. In addition, the gesture-based model could accommodate stringent technological or legal requirements; the gaze-based model was well-suited to AR, VR, and metaverse uses; and the combined model provided an option for multimodal human-computer interaction [8].…”
Section: Purpose Of Metaverse
confidence: 99%
“…For example, to train a two‐channel neural network for behavior classification, some methods use RGB images and optical flow information 15 . Machine learning theory and practice have confirmed that knowledge can be transferred and shared between related machine‐learning tasks, and that learning multiple tasks together can lead to better performance than learning each task separately 16‐18 . Research in this area mainly focuses on the task of video behavior detection.…”
Section: Introduction
confidence: 99%
“…This text presents a novel approach to addressing complex problems that are challenging to solve using single‐modal learning in real‐world scenarios. Examples of such problems include audio‐visual speech recognition, 17 multi‐modal emotion recognition, 18 multi‐modal machine translation, 19 visual question answering (VQA), 20 multi‐modal retrieval, 21 and multi‐modal behavior recognition. They then compute feature descriptors and utilize the bag‐of‐visual‐words model to describe behavior, and build a vocabulary of visual words to strengthen the description of behavior 22 .…”
Section: Introduction
confidence: 99%
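The bag-of-visual-words pipeline mentioned in the excerpt above (compute local feature descriptors, cluster them into a vocabulary of visual words, then describe a behavior by its histogram of word assignments) can be sketched as follows. This is a minimal illustrative sketch, not the cited paper's implementation: the toy k-means, the random 8-dimensional descriptors standing in for real local features, and all function names are assumptions.

```python
import numpy as np

def build_vocabulary(descriptors, k, iters=10, seed=0):
    # Toy k-means over local feature descriptors: the k cluster
    # centers become the "visual words" of the vocabulary.
    rng = np.random.default_rng(seed)
    centers = descriptors[rng.choice(len(descriptors), size=k, replace=False)]
    for _ in range(iters):
        # Distance from every descriptor to every center, then
        # assign each descriptor to its nearest center.
        dists = np.linalg.norm(descriptors[:, None] - centers[None], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            members = descriptors[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    return centers

def bow_histogram(descriptors, vocabulary):
    # Map each descriptor to its nearest visual word and return the
    # L1-normalized word histogram describing the whole clip.
    dists = np.linalg.norm(descriptors[:, None] - vocabulary[None], axis=2)
    words = dists.argmin(axis=1)
    hist = np.bincount(words, minlength=len(vocabulary)).astype(float)
    return hist / hist.sum()

# Stand-in for descriptors extracted from one video clip.
rng = np.random.default_rng(1)
descs = rng.normal(size=(200, 8))
vocab = build_vocabulary(descs, k=16)
h = bow_histogram(descs, vocab)
```

In a real system the descriptors would come from a local feature extractor (e.g. HOG or SIFT over video frames), and the resulting histograms would be fed to a downstream classifier.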