2023
DOI: 10.1007/s43681-022-00248-3

What should AI see? Using the public’s opinion to determine the perception of an AI

Abstract: Deep neural networks (DNN) have made impressive progress in the interpretation of image data, so it is conceivable and to some degree realistic to use them in safety-critical applications like automated driving. From an ethical standpoint, the AI algorithm should take into account the vulnerability of objects or subjects on the street, which ranges from “not at all”, e.g. the road itself, to “high vulnerability” of pedestrians. One way to take this into account is to define the cost of confusion of one seman…
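The “cost of confusion” idea sketched in the abstract can be illustrated with a small, hedged example: a cost matrix that penalises confusing a vulnerable class (e.g. a pedestrian) with a non-vulnerable one (e.g. the road) more heavily than the reverse, used to weight a per-pixel segmentation loss. The class names, vulnerability weights, and the cost-sensitive cross-entropy below are illustrative assumptions, not the formulation used in the paper.

```python
import torch
import torch.nn.functional as F

# Hypothetical vulnerability ranking: higher value = more vulnerable road user.
# Classes and weights are illustrative, not taken from the paper.
VULNERABILITY = {"road": 0.0, "vehicle": 0.5, "cyclist": 0.8, "pedestrian": 1.0}
CLASSES = list(VULNERABILITY)

def confusion_cost_matrix() -> torch.Tensor:
    """Cost of predicting class j when the true class is i.

    Confusing a highly vulnerable class (e.g. pedestrian) with a
    non-vulnerable one (e.g. road) is penalised most heavily.
    """
    n = len(CLASSES)
    cost = torch.ones(n, n)
    for i, true_cls in enumerate(CLASSES):
        for j, pred_cls in enumerate(CLASSES):
            if i == j:
                cost[i, j] = 0.0
            else:
                # Penalty grows with the vulnerability lost by the confusion.
                cost[i, j] = 1.0 + max(
                    0.0, VULNERABILITY[true_cls] - VULNERABILITY[pred_cls]
                )
    return cost

def cost_sensitive_loss(logits, targets, cost):
    """Weight per-pixel cross-entropy by the expected confusion cost.

    logits: (N, C, H, W) raw scores, targets: (N, H, W) class indices.
    """
    probs = F.softmax(logits, dim=1)                           # (N, C, H, W)
    expected_cost = (cost[targets].permute(0, 3, 1, 2) * probs).sum(dim=1)
    ce = F.cross_entropy(logits, targets, reduction="none")    # (N, H, W)
    return (ce * (1.0 + expected_cost)).mean()
```

In a training loop, one would call cost_sensitive_loss(model(images), labels, confusion_cost_matrix().to(images.device)) in place of a plain cross-entropy, so that vulnerability-relevant confusions dominate the gradient.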

Cited by 5 publications (10 citation statements)
References: 52 publications
“…Thirdly, as the use of AI in education and assessment becomes more prevalent, it is essential that students understand the principles behind the technology in order to maintain academic integrity and prevent cheating, as mentioned previously (Chan, 2023; Cotton et al., 2023). An AI education policy can teach students about the ethical considerations surrounding AI, such as bias and fairness, as well as the potential consequences of using AI in academic contexts.…”
Section: Generative AI and Generative Pre-trained Transformers
Mentioning confidence: 99%
“…Reviews are labeled with positive, neutral, or negative sentiment based on visitor star ratings: 1 and 2 stars are categorized as negative, 3 stars as neutral, and 4 and 5 stars as positive [21], [22].…”
Section: Data Collection and Labeling
Mentioning confidence: 99%
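The star-to-sentiment rule quoted above is simple enough to state as code. This is only a sketch of the mapping described in that citing paper; the function name and types are my own.

```python
def label_sentiment(stars: int) -> str:
    """Map a 1-5 star visitor rating to a sentiment label.

    Rule from the quoted statement: 1-2 stars -> negative,
    3 stars -> neutral, 4-5 stars -> positive.
    """
    if stars <= 2:
        return "negative"
    if stars == 3:
        return "neutral"
    return "positive"
```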
“…Most research reviewed for this paper agrees that students should not completely rely on text created by GenAI tools. Furthermore, if AI is to be embraced by academic institutions for student use, effort must be put into protecting academic integrity (Chan, 2023; Chan & Hu, 2023; Kanabar, 2023; Lepik, 2023; Shoufan, 2023; Tossell et al., 2023; Van Wyk, 2024). This provides an opportunity for students and educators to consider the ethics of AI in contemporary academia.…”
Section: Reliability
Mentioning confidence: 99%
“…Since 2023, many scholarly articles (Bishop, 2023; Chan, 2023; Chan & Hu, 2023; Cotton et al., 2023; Fitria, 2023; Fyfe, 2023; Kanabar, 2023; Mohammadkarimi, 2023; Price & Sakellarios, 2023; Smolansky et al., 2023; Van Wyk, 2024) have been published concerning ethics and assessing the use of AI-generated writing in student assignments. According to Kanabar (2023), in the spring semester of 2023 it was becoming evident that strategies to prohibit GenAI would not be realistic.…”
Section: Ethics
Mentioning confidence: 99%