2015
DOI: 10.1609/aaai.v29i1.9381

Tackling Mental Health by Integrating Unobtrusive Multimodal Sensing

Abstract: Mental illness is becoming a major plague in modern societies and poses challenges to the capacity of current public health systems worldwide. With the widespread adoption of social media and mobile devices, and rapid advances in artificial intelligence, a unique opportunity arises for tackling mental health problems. In this study, we investigate how users’ online social activities and physiological signals detected through ubiquitous sensors can be utilized in realistic scenarios for monitoring their mental …

Cited by 56 publications (16 citation statements)
References 29 publications
“…Comparing the tables, it can be deduced that most of the studies focus on detecting depression, followed by depression and anxiety disorder as comorbid conditions. From the tables, it is also notable that most authors have successfully investigated audio and/or facial features for the detection of depression and/or anxiety disorder (Ooi et al 2013; Zhou et al 2015; Williamson et al 2016; Pampouchidou et al 2015; Pampouchidou et al 2020; Yang et al 2016, 2017; Dham et al 2017; Alhanai et al 2018; He and Cao 2018; Afshan et al 2018; Zhu et al 2018; Venkataraman 2018; Gavrilescu and Vizireanu Aug. 2019; Melo et al 2019; Victor et al 2019; Chlasta et al 2019; Guntuku et al 2019; Detecting Depression Using a Framework Combining Deep Multimodal Neural Networks with a Purpose-Built Automated Evaluation xxxx; Vázquez-Romero and Gallardo-Antolín 2020; Quatieri et al 2020; Shinde et al 2020; Zhang et al Jul. 2020; Espinola et al 2021; Matteo et al 2021; Guo et al 2021; Albuquerque et al 2021).…”
Section: Summarised Studies (mentioning)
confidence: 99%
“…Some authors have also scoured and analysed texts from social media such as Twitter, Facebook and Reddit (Thoduparambil 2020; Saeedi 2020; Xie et al 2020; Islam et al 2018; Eichstaedt et al 2018; Cacheda et al 2019; Trotzek et al 2020; Owen et al 2020; Ramírez-Cifuentes et al 2020; Safa et al 2021; Tong et al 2022; Gupta et al 2022; Stankevich et al 2018, 2020; Hussain et al 2019; Alsagri and Ykhlef 2020). A few authors have explored the combination of audio and textual features (Alhanai et al 2018; Park and Moon 2022) or audio and visual recordings (Yang et al 2017; Mallol-Ragolta et al 2020; Saidi et al 2020), while some others have used unique methods such as a combination of time-series signal features (Zhou et al 2015), measurement of electrodermal activity (Kim et al 2018), magnetic resonance imaging (Kipli et al 2013; Yamashita et al 2020; Boeke et al 2020), kinematic skeleton data (Li et al 2021), photoplethysmogram (PPG) signal feature extraction (Khandoker 2017), gait characteristics (Wang et al 2021) and an optical-flow-based visual method (Zhu et al 2018). Haritha et al (2017) explored respiratory signals for anxiety detection.…”
Section: Summarised Studies (mentioning)
confidence: 99%
“…Regarding visual cues, it is common to use features from the subject's face such as facial expression, head movement, and characteristics from eyes and mouth [27]. For instance, Zhou et al [28] developed an integrated multimodal system that uses webcam video, social media content and keyboard/mouse user interaction to classify depressed patients into two severity levels. However, they only used 5 depressed subjects out of 10, which is a small sample size.…”
Section: Related Work (mentioning)
confidence: 99%
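The citation above describes the paper's system as fusing webcam video, social media content, and keyboard/mouse interaction to classify patients into two severity levels. Below is a minimal sketch of what such feature-level (early) fusion could look like; the feature names, dimensions, synthetic data, and logistic-regression classifier are all illustrative assumptions, not the authors' actual pipeline.

```python
# Hypothetical early-fusion sketch for two-level depression severity
# classification. Each modality contributes a per-subject feature vector;
# these are concatenated and fed to a single classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_subjects = 40  # illustrative only; the cited study had far fewer subjects

# Stand-ins for per-subject features from each modality (dimensions assumed).
video_feats = rng.normal(size=(n_subjects, 16))       # e.g., facial action units
social_feats = rng.normal(size=(n_subjects, 8))       # e.g., posting/sentiment stats
interaction_feats = rng.normal(size=(n_subjects, 4))  # e.g., typing/mouse dynamics
severity = rng.integers(0, 2, size=n_subjects)        # two severity levels

# Early fusion: concatenate all modalities into one vector per subject.
X = np.hstack([video_feats, social_feats, interaction_feats])

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_score(clf, X, severity, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")
```

With only a handful of depressed subjects, as the citation points out, such cross-validated estimates would be highly unstable, which is precisely why the small sample size is a limitation.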
“…An investigation team from the University of Rochester proposed a model that could link mental states and a set of signals extracted from different sources [17]. These signals included the sentiment analysis of the tweets and tweet replies posted by the users.…”
Section: Tackling Mental Health By Integrating Unobtrusive Multimodal... (mentioning)
confidence: 99%
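The tweet-sentiment signal mentioned above can be made concrete with a small, self-contained sketch of turning a user's tweets and tweet replies into numeric features. The tiny lexicon, the token cleaning, and the per-user aggregates are illustrative assumptions, not the feature set of [17].

```python
# Hypothetical lexicon-based sentiment features over a user's tweets and
# the replies they receive. Real systems would use a proper sentiment
# model; this toy version only illustrates the shape of the signal.
from statistics import mean

POSITIVE = {"happy", "great", "good", "love", "calm"}
NEGATIVE = {"sad", "tired", "alone", "hopeless", "anxious"}

def sentiment_score(text: str) -> float:
    """Lexicon score in [-1, 1]: (positive - negative tokens) / total tokens."""
    tokens = text.lower().split()
    if not tokens:
        return 0.0
    pos = sum(t.strip(".,!?") in POSITIVE for t in tokens)
    neg = sum(t.strip(".,!?") in NEGATIVE for t in tokens)
    return (pos - neg) / len(tokens)

def user_sentiment_features(tweets: list[str], replies: list[str]) -> dict:
    """Aggregate per-user features over their tweets and received replies."""
    return {
        "tweet_sentiment_mean": mean(map(sentiment_score, tweets)) if tweets else 0.0,
        "reply_sentiment_mean": mean(map(sentiment_score, replies)) if replies else 0.0,
        "neg_tweet_ratio": sum(sentiment_score(t) < 0 for t in tweets) / max(len(tweets), 1),
    }

print(user_sentiment_features(
    tweets=["feeling hopeless and tired today", "a good walk helped"],
    replies=["hang in there, sending love!"],
))
```

Aggregates like these could then serve as the social-media column block in a fused feature matrix such as the one sketched earlier.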
“…Even though the literature contains proposals that use sentiment analysis, these proposals do not rely solely on the subjective information in the posts. They combine it with other objective features and carry only limited information on sentiment, emotion, and other subjective features [17][18][19][20].…”
Section: Introduction (mentioning)
confidence: 99%