Multidisciplinary Know-How for Smart-Textiles Developers 2013
DOI: 10.1533/9780857093530.2.329

Signal processing technologies for activity-aware smart textiles

Cited by 6 publications (5 citation statements). References 119 publications (108 reference statements).
“…Popular methodologies for state estimation of a complex system include maximum likelihood (ML) estimation [20], Kalman filters [21], particle filters [22], and covariance intersection and covariance union techniques [23]. The term decision fusion mainly refers to combining the decisions of different classifiers into a common decision about an activity that has occurred [24]. Examples of decision fusion include Bayesian inference [25], Dempster-Shafer inference [26], abductive reasoning using neural networks [27] or fuzzy logic [28], and the use of semantic features for decision making [29].…”
Section: Data Fusion Techniques In Literature Categorization
confidence: 99%
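The state-estimation techniques listed above can be illustrated in their simplest form: a scalar Kalman filter smoothing a noisy sensor stream under a random-walk state model. This is a generic sketch, not code from the chapter; the process-noise and measurement-noise constants `q` and `r` are illustrative assumptions.

```python
def kalman_1d(measurements, q=1e-3, r=0.1, x0=0.0, p0=1.0):
    """Filter a scalar measurement stream with a 1-D Kalman filter.

    Assumes a random-walk state model: the state is predicted to stay
    constant, with process noise variance q; measurements have noise
    variance r. Returns the list of filtered state estimates.
    """
    x, p = x0, p0  # state estimate and its variance
    estimates = []
    for z in measurements:
        # Predict step: state unchanged, uncertainty grows by q.
        p += q
        # Update step: blend prediction and measurement via Kalman gain k.
        k = p / (p + r)
        x += k * (z - x)
        p *= (1.0 - k)
        estimates.append(x)
    return estimates
```

With a stream of noisy readings around a true value of 1.0, e.g. `kalman_1d([1.1, 0.9, 1.05, 0.95])`, the estimates settle near 1.0 while the filter's variance `p` shrinks after each update.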
“…Human physical activity comprises static postures (sitting, standing, lying) [50], [51], transitional activities (sit-to-stand, sit-to-lie, stand-to-walk) [52], [53], and dynamic motions (walking, running, stair climbing, exercising, household chores) [54], [55]. Previous research on Human Activity Recognition (HAR) has used several types of approaches: 1) computer vision-based HAR, which uses cameras to record the various activities [56], [53], [57]; 2) environmental sensor-based HAR, in which sensors and signals such as sound sensors, light sensors, or RFID tags are used to detect events [58], [59]; 3) wearable sensor-based HAR, in which sensors mounted on various parts of the body (such as accelerometers and strain or stretch sensors) provide information that is analyzed to identify the activities [21], [14], [60], [61]; and 4) time geography-based HAR, which uses time and location data to classify human activities [62], [53], [63]-[65].…”
Section: Literature Review
confidence: 99%
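Wearable sensor-based HAR typically segments the accelerometer stream into fixed-size windows and computes statistical features per window before classification. The sketch below is a generic illustration of that front end, not the pipeline of any cited paper; the window size and the feature set (magnitude mean, magnitude standard deviation, signal magnitude area) are assumptions.

```python
import math

def window_features(acc, window=50):
    """Extract simple per-window features from (x, y, z) accelerometer samples.

    For each non-overlapping window, computes: mean acceleration magnitude,
    standard deviation of the magnitude, and signal magnitude area (SMA).
    Returns a list of (mean, std, sma) tuples, one per full window.
    """
    feats = []
    for start in range(0, len(acc) - window + 1, window):
        w = acc[start:start + window]
        mags = [math.sqrt(x * x + y * y + z * z) for x, y, z in w]
        mean = sum(mags) / window
        std = math.sqrt(sum((m - mean) ** 2 for m in mags) / window)
        # SMA: mean of summed absolute per-axis accelerations over the window.
        sma = sum(abs(x) + abs(y) + abs(z) for x, y, z in w) / window
        feats.append((mean, std, sma))
    return feats
```

The resulting feature vectors would then be fed to any classifier (decision tree, SVM, neural network); a static posture such as lying still produces a near-constant magnitude with low standard deviation, while walking produces a high one.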
“…There is a change in sensor parameters when the subject's elbow or hand is suddenly pressed or twisted. Respondents were selected for data collection by age (20-25 years) and weight (45-55 kg). The data were collected at 50-millisecond intervals.…”
Section: A) Data Collection
confidence: 99%
“…However, if it has an additional camera or LiDAR, it can navigate itself to a safe place after successfully avoiding the obstacle, provided such logic is built in for that failure. Roggen et al., Luo et al., and Foo et al. [41]-[43] studied high-level decision data fusion and concluded that using multiple sensors with data fusion is better than using individual sensors without data fusion. In addition, several researchers [27], [39], [44]-[46] found that every sensor provides a different, sometimes unique, type of information about the selected environment (the tracked object, the avoided object, the autonomous vehicle itself, the world in which it operates, and so on), with differing accuracy and differing levels of detail.…”
Section: Multiple Sensors Vs Single Sensor
confidence: 99%
“…Decision or high-level data fusion: at the highest level, the system decides the major tasks and makes decisions based on the fusion of information supplied by the system features [41], [43].…”
Section: Levels Of Data Fusion Application
confidence: 99%
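High-level decision fusion of the kind described can be sketched as a (optionally weighted) vote over the labels emitted by independent classifiers. This is a minimal generic illustration, not the specific fusion scheme of the works cited above; the label names and weights are hypothetical.

```python
from collections import Counter

def fuse_decisions(decisions, weights=None):
    """Combine per-classifier activity labels into a single decision.

    Each classifier contributes its label with a weight (default 1.0);
    the label with the highest total score wins. Ties resolve to the
    first label seen, since Counter preserves insertion order.
    """
    weights = weights or [1.0] * len(decisions)
    scores = Counter()
    for label, w in zip(decisions, weights):
        scores[label] += w
    return max(scores, key=scores.get)
```

For example, `fuse_decisions(["walk", "walk", "sit"])` yields `"walk"`, while weighting a more reliable classifier higher, as in `fuse_decisions(["walk", "sit"], weights=[0.3, 0.9])`, yields `"sit"`.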