2021
DOI: 10.1057/s41271-021-00319-5
Advancing health equity with artificial intelligence

Cited by 43 publications (31 citation statements)
References 35 publications
“…It is critical for policy-makers to understand that bias mitigation should not end with AI model development but, rather, extend across the product lifecycle. We believe in line with Thomasian et al 53…”
Section: Important Considerations (supporting)
confidence: 92%
“…The largest concern surrounding AI solutions is the potential for systems to continue perpetuating inequities. [6][7][8]11,12,39,40 Thus, AI initiatives should have two main goals: (1) they should be designed and utilized in a manner that does not create or maintain health disparities currently experienced by vulnerable groups, and (2) they should address and remove existing health disparities. 6,39 To ensure that all healthcare-based AI embodies these two goals, it is important to create system level changes such as a federal and/or provincial regulatory framework that oversees the equity dimensions in the implementation of AI solutions.…”
Section: Equity Assessment (mentioning)
confidence: 99%
“…AI has been criticized for being “no more than human behaviour reflected back to us” [ 98 ]. Inherent to this argument is the ability of AI to “reflect the biases present in our collective conscience” [ 70 ]. The discourse on guidelines to rectify and prioritize health equity in the development life cycle of an AI system [ 99 ], as well as around ethical AI applications generally, is becoming more prolific.…”
Section: Discussion (mentioning)
confidence: 99%
“…Lucidchart allowed non–computing science team members (eg, psychiatry and lived experience experts) to communicate necessary chatbot conversational flow behaviors to the computing science team clearly and effectively, including any emergency- or urgency-related prompts and responses. The open-source use of big data has been significantly criticized for perpetuating systemic racism and societal inequalities [ 70 ]. As such, developing training data using our multidisciplinary team was important to ensure that chatbot behavior remained respectful and reduce existing issues inherent in big data.…”
Section: Methods (mentioning)
confidence: 99%
“…The use of artificial intelligence (AI) in clinical care and public health contexts has expanded rapidly in recent years [1][2][3][4][5][6], including throughout the COVID-19 pandemic [7][8][9][10][11][12][13][14][15]. While emerging AI applications have the potential to improve health care quality and fairness [16][17][18][19][20][21], they may alternatively perpetuate or exacerbate inequities if they are not designed, deployed, and monitored appropriately [22][23][24][25][26].…”
Section: Background and Rationale (mentioning)
confidence: 99%