Abstract: Despite its considerable potential in the manufacturing industry, the application of artificial intelligence (AI) in industry still faces the challenge of insufficient trust. Because AI is a black box whose operations ordinary users have difficulty understanding, users in organizations rely on institutional cues when deciding whether to trust AI. This study therefore investigates trust in AI in the manufacturing industry from an institutional perspective. We identify three institutional dimensio…
“…Some authors argue that helping people overcome the prejudices they have towards AI, by increasing trust in machines, will lead to positive implications for society. A greater acceptance of AI would imply greater use of it [60] in supporting humans when making important decisions (e.g., medical, financial decisions), improving the quality of life (e.g., nutrition, physical exercise, medical screening) and work [61]–[63]. Other authors have raised their concerns, arguing that indiscriminately favoring the acceptance of AI could lead to devastating consequences for society, especially for technologies whose misuse or abuse might turn, in the long term, to be more deleterious than beneficial [64], [65].…”
The latest developments in the field of Artificial Intelligence (AI) have given rise to many ethical and socioeconomic concerns. Nonetheless, the impact of AI technologies is evident and tangible in our everyday life. This dichotomy leads to mixed feelings towards AI: people recognize the positive impact of AI, but they also show concerns, especially about their privacy and security. In this paper, we try to understand whether implicit and explicit attitudes towards AI are coherent. We investigated explicit and implicit attitudes towards AI by combining a self-report measure with an implicit measure, i.e., the Implicit Association Test. We analysed the explicit and implicit responses of 829 participants. Results revealed that while most participants explicitly express a positive attitude towards AI, their implicit responses seem to point in the opposite direction. Results also show that, in both the explicit and implicit measures, females show a more negative attitude than males, and people who work in the field of AI are inclined to be positive towards AI.
“…Institutional theory is the lens through which this study is framed. Previous studies have attempted to explain the institutional adoption of intervention programs using the institutional framework (Li et al., 2021). According to institutional proponents, organizations and society must strictly adhere to societal expectations of acceptable practice to benefit from the ongoing support they require for sustenance from their citizenry (Kılıç et al., 2021). The network theory of the benefits weaker countries might obtain from complying with the social rules of stronger nations is embedded in institutional theory (Robertson et al., 2021).…”
Section: Institutional Theory Of TETFUND AST&D Funding
confidence: 99%
“…Thus, developing countries establishing education intervention funding agents such as TETFUND would derive legitimate benefits from the international education intervention framework's institutional network (Robertson et al., 2021). Institutional theory allows the researcher to comprehend why countries such as Nigeria would emulate and conform to acceptable norms of education intervention practices in developed countries for their intervention fund agencies, given the perceived benefits of those funds to their beneficiaries and stakeholders (Li et al., 2021). In this regard, normative pressures enable Nigeria to replicate existing foreign intervention models so as to conform to global education intervention funding standards and share the collective value they bring to beneficiaries.…”
Section: Institutional Theory Of TETFUND AST&D Funding
confidence: 99%
“…The study expands on the institutional framework to argue that the propensity of developing countries to emulate successful and legitimate developed nations by funding higher education initiatives through intervention funding is known as imitative, logically equivalent pressures (Li et al., 2021). A region's exposure to bilateral education programs and trade exchanges exposes nations to imitations of best practices in human capital development (Li et al., 2021). Kılıç et al. (2021) and Robertson et al. (2021) also found a connection between the sharing of educational resources and advancement in education.…”
The study examined the relationship between TETFUND AST&D beneficiaries’ satisfaction with the benefit and the obstacles associated with funding for academic staff training and development. The study’s population was drawn from Abdu Gusau Polytechnic in the Northwest, Nigeria. Twenty of the thirty structured questionnaires were returned as valid and used in the study. Descriptive statistics were obtained with SPSS 2023, and the hypotheses were tested using Spearman’s correlation coefficients. The results reveal a strongly positive and statistically significant association between AST&D recipients’ satisfaction and both the benefits and the challenges. Second, the relationship between AST&D benefits and challenges was strongly positive and statistically significant, indicating that the challenges had no adverse effect on the benefits. The study recommends that TETFUND increase local funding to attract applicants. Balancing the disparity between local and international AST&D funds would attract more local trainees and would also increase Nigeria’s assets denominated in foreign currency (foreign reserves) held by the Central Bank of Nigeria (CBN).
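The hypothesis test above relies on Spearman’s rank correlation. For readers without SPSS, a minimal stand-alone sketch of the statistic is shown below; the implementation and the tie-handling ranks are standard, but the data passed to it would be hypothetical, not the study’s responses.

```python
def ranks(values):
    """Assign 1-based ranks, averaging ranks for tied values."""
    sorted_vals = sorted(values)
    # For value v: first rank is index+1; average of a tie run of k values
    # starting there is index + 1 + (k - 1) / 2 = index + k / 2 + 0.5.
    return [sorted_vals.index(v) + sorted_vals.count(v) / 2 + 0.5
            for v in values]

def spearman_rho(x, y):
    """Spearman's rho = Pearson correlation of the rank vectors."""
    n = len(x)
    rx, ry = ranks(x), ranks(y)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)
```

With only 20 valid responses, as in the study, the significance of any computed rho would still need to be checked against critical values for small samples.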
“…All trust dimensions can influence the probability that an agent will successfully execute a task. Trust can also be influenced by physical characteristics such as the human-likeness of a robot [45], the type of agent and context [46], and institutional perspectives [47]. In this paper, we simplify the trust estimate by considering only the dimension of capability, since trust in automation primarily focuses on performance [41] and robot performance is an important and strong contributor to trust in human–robot interaction (HRI) [48], [49].…”
Effective human–robot collaboration requires the appropriate allocation of indivisible tasks between humans and robots. A task allocation method that appropriately makes use of the unique capabilities of each agent (either a human or a robot) can improve team performance. This paper presents a novel task allocation method for heterogeneous human–robot teams based on artificial trust from a robot that can learn agent capabilities over time and allocate both existing and novel tasks. Tasks are allocated to the agent that maximizes the expected total reward. The expected total reward incorporates trust in the agent to successfully execute the task as well as the task reward and cost associated with using that agent for that task. Trust in an agent is computed from an artificial trust model, where trust is assessed along a capability dimension by comparing the belief in agent capabilities with the task requirements. An agent’s capabilities are represented by a belief distribution and learned using stochastic task outcomes. Our task allocation method was simulated for a human–robot dyad. The team total reward of our artificial trust-based task allocation method outperforms other methods both when the human’s capabilities are initially unknown and when the human’s capabilities belief distribution has converged to the human’s actual capabilities. Our task allocation method enables human–robot teams to maximize their joint performance.
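The allocation rule described in the abstract — assign each task to the agent that maximizes expected total reward, combining trust, task reward, and agent cost — can be sketched as follows. This is an illustrative simplification: the combination `trust * reward - cost` and all names and values are assumptions for exposition, whereas the paper learns trust from belief distributions over agent capabilities.

```python
def allocate(task, agents, trust, reward, cost):
    """Return the agent with the highest expected total reward for `task`.

    trust[agent][task]: probability the agent completes the task (learned
    in the paper from a capability belief; fixed here for illustration).
    reward[task]: payoff for successful completion.
    cost[agent][task]: cost of assigning the task to that agent.
    """
    def expected_total_reward(agent):
        return trust[agent][task] * reward[task] - cost[agent][task]
    return max(agents, key=expected_total_reward)
```

For example, with hypothetical values trust = 0.9/0.6, reward = 10, and costs 5/1 for a human/robot pair, the robot's expected reward (5.0) exceeds the human's (4.0), so the task goes to the robot despite the human's higher capability.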
scite is a Brooklyn-based organization that helps researchers better discover and understand research articles through Smart Citations–citations that display the context of the citation and describe whether the article provides supporting or contrasting evidence. scite is used by students and researchers from around the world and is funded in part by the National Science Foundation and the National Institute on Drug Abuse of the National Institutes of Health.