2020
DOI: 10.3390/s20030730

Expressure: Detect Expressions Related to Emotional and Cognitive Activities Using Forehead Textile Pressure Mechanomyography

Abstract: We investigate how pressure-sensitive smart textiles, in the form of a headband, can detect changes in facial expressions that are indicative of emotions and cognitive activities. Specifically, we present the Expressure system, which performs surface pressure mechanomyography on the forehead using an array of textile pressure sensors that is not dependent on specific placement or attachment to the skin. Our approach is evaluated in systematic psychological experiments. First, through a mimicking expression exper…
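
To make the sensing idea concrete, below is a minimal sketch of how frames from a forehead textile pressure array might be reduced to features and classified. The 5x7 grid size, the feature set, and the k-nearest-neighbors classifier are illustrative assumptions for this sketch, not the Expressure system's actual pipeline.

```python
# Hypothetical sketch of textile pressure mechanomyography processing.
# Grid size, features, and classifier are assumptions, not the paper's method.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def pressure_features(frame: np.ndarray) -> np.ndarray:
    """Reduce one pressure-map frame (rows x cols) to a small feature vector."""
    total = frame.sum() + 1e-9                    # guard against an all-zero frame
    rows, cols = np.indices(frame.shape)
    cop_y = (rows * frame).sum() / total          # center of pressure, row axis
    cop_x = (cols * frame).sum() / total          # center of pressure, column axis
    return np.array([frame.mean(), frame.std(), frame.max(), cop_y, cop_x])

# Toy data standing in for recorded sensor frames and expression labels.
rng = np.random.default_rng(0)
frames = rng.random((40, 5, 7))                   # 40 frames from a 5x7 sensor grid
labels = rng.integers(0, 3, size=40)              # 3 mimicked expression classes

X = np.stack([pressure_features(f) for f in frames])
clf = KNeighborsClassifier(n_neighbors=3).fit(X, labels)
print(clf.predict(pressure_features(frames[0]).reshape(1, -1)))
```

Because the headband is not tied to a specific placement on the skin, placement-invariant aggregate features (totals, centers of pressure) are a natural starting point, which is why this sketch avoids features keyed to individual sensor positions.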

Cited by 19 publications (18 citation statements)
References 62 publications (69 reference statements)
“…In Table 2, we have summarized key previously published approaches to non-vision-based analysis of facial actions according to the sensors employed, number of participants (average 15.25), total number of samples (average 1885.25), number of experiment repetitions (average 2), placement of the sensors (typical glasses-frame positions), set of expressions (average 6.8), and performance results (a direct comparison is not possible). The larger number of volunteers in [17] (20, with five repetitions) reflects that an additional cognitive-load experiment was designed in that work (requiring a minimum of 20 participants [72]); in our research, we evaluate only facial muscular movements with sound. It is important to highlight that our experimental design was never intended as a psychology experiment; it is a hardware-sensing feasibility evaluation.…”
Section: Evaluation Results
confidence: 99%
“…This will allow us to go from mimicked expressions in a lab setting to recognizing real emotions under realistic circumstances. We will also investigate the fusion of differential sound information with other sensing modalities, in particular EMG, (simple) EEG, and our mechanomyography based on textile pressure sensor arrays [17].…”
Section: Discussion
confidence: 99%
“…Likewise, Zhou et al. [47] explored how pressure-sensitive smart textiles can help monitor people’s emotional states through changes in facial expressions. They proposed the use of textile pressure mapping arrays integrated into a headband to capture forehead muscle movements.…”
Section: Related Work
confidence: 99%
“…These cues are used to associate the emotional state of an individual with an external stimulus. Emotion recognition using speech [11,12,13], facial expressions [14,15,16], and their fusion [17,18] has been explored. These conventional methods for emotion recognition have limitations such as privacy concerns and camera positioning [19].…”
Section: Introduction
confidence: 99%