2022
DOI: 10.1093/schbul/sbac008
What’s That Noise? Interpreting Algorithmic Interpretation of Human Speech as a Legal and Ethical Challenge

Cited by 5 publications (3 citation statements)
References 11 publications
“…Realizing NLP's potential for measuring aspects of psychosis will require large amounts of complex data collected from geographically and culturally diverse groups [75], and coordination between multidisciplinary and international groups [76,77]. A comprehensive psychometrics strategy will be a critical part of this endeavor.…”
Section: Discussion
Citation type: mentioning (confidence: 99%)
“…While there has been rapid growth and use of these innovations, the ethical principles of how and when they should be employed have not developed at the same rate. Indeed, leveraging natural language processing (NLP) methods for speech analysis in research and applied settings evokes a variety of legal and ethical issues (Hauglid, 2022). As the boundaries between research and practice are often permeable and iterative (especially due to the growth of research–practice partnerships in psychological science), the ethical issues we raise herein pertain to all aspects of research, development, and application.…”
Section: The Importance of Artificial Intelligence and Language Analysis
Citation type: mentioning (confidence: 99%)
“…Current standards for algorithmic systems in healthcare emphasize that it is critical to harness “human-in-the-loop” practices that enable collaboration between humans and machines, as failing to do so could be catastrophic (see 10; this issue). These structural safeguards, in which AI systems act as intelligence augmentation for responsible professionals rather than as artificial intelligence replacing them, can certainly help decrease known disparities that might otherwise emerge in automated systems (see 11; this issue). They will not, however, address the (growing) challenge of what to do when human and machine judgment conflict, nor what our expectations of humans should be once these algorithms are implemented in remote-monitoring applications. These concerns may seem premature, since at present, despite a growing number of proof-of-concept studies, the adoption of these measures in mainstream assessment is hampered by the notable absence of core research evaluating their basic psychometric properties, notably test-retest reliability, divergent validity, systematic biases, and the complexity associated with a slew of potential moderators (see 8; this issue).…”
Citation type: mentioning (confidence: 99%)