Proceedings of the 2014 IEEE Emerging Technology and Factory Automation (ETFA)
DOI: 10.1109/etfa.2014.7005105
Increasing efficiency of M-out-of-N redundancy

Cited by 8 publications (7 citation statements). References 12 publications.
“…As scoring evidence could be subjective, we conducted an inter‐rater reliability test (Box 2) to assess the consistency with which different individuals scored the different aspects of the weight and strength of support of a range of pieces of evidence for different assumptions (rating using numerical scores from 0 to 3 based on the Table 1 categories). We found mostly “strong,” or at least “satisfactory,” agreement between individuals in how they applied this scoring system (Finn, 1970; Gamer et al, 2012). As will be discussed later, it is important that the composition of the decision‐making body or group assessing the evidence is as diverse and inclusive as possible, with a range of expertise and experience so that the collation and assessment of evidence (and ultimately decision‐making; Hemming, Burgman, et al, 2018; Hemming, Walshe, et al, 2018) is high quality and not systematically biased against any particular source of evidence.…”
Section: A Process To Assess Assumptions Using Evidence
confidence: 76%
“…The assessment data was collected separately for the adults' and high school students' speech samples, and the assessment processes are described in detail in [6] and [17]. In these studies, the inter-rater reliability was tested with intraclass correlation coefficient (ICC) using the irr package in R [22]. The average type ICC was selected as reliability measure, since it takes into account the scope of disagreement by comparing individual ratings of a sample to the mean rating of the sample.…”
Section: Speech Data and Human Assessments
confidence: 99%
“…In an initial version, the AIS Class A station available on the ship can be used. In order to increase the availability and reliability of data and communications received, it will be necessary to analyse and evaluate the inclusion of redundant equipment in the system (Gamer et al, 2014).…”
Section: E/E/PE System
confidence: 99%
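The last citation statement invokes the cited paper when arguing for redundant equipment to raise availability. For context, the availability of an M-out-of-N arrangement is conventionally computed with the standard binomial reliability formula for identical, independent components; the sketch below uses that textbook formula, not any specific model from the cited paper, and the function name and example values are illustrative assumptions.

```python
from math import comb

def m_out_of_n_reliability(m: int, n: int, p: float) -> float:
    """Probability that at least m of n identical, independent
    components are operational, given each works with probability p.

    Standard binomial formula: sum over k = m..n of C(n, k) * p^k * (1-p)^(n-k).
    """
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(m, n + 1))

# Hypothetical example: 2-out-of-3 voting with component reliability 0.9
# R = C(3,2)*0.9^2*0.1 + C(3,3)*0.9^3 = 0.243 + 0.729 = 0.972
print(round(m_out_of_n_reliability(2, 3, 0.9), 3))
```

The example shows why redundancy helps: a 2-out-of-3 system built from 0.9-reliable components reaches 0.972, better than any single component alone.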