There is a growing literature on the concept of e-trust and on the feasibility and advisability of "trusting" artificial agents. In this paper we present an object-oriented model for thinking about trust in both face-to-face and digitally mediated environments. We review important recent contributions to the literature on e-trust alongside the presentation of our model. We identify three important types of trust interactions and examine trust from the perspective of a software developer. Too often, research in this area has focused primarily on artificial agents and the humans they may encounter after they are deployed. We contend that the humans who design, implement, and deploy artificial agents are crucial to any discussion of e-trust and to understanding the distinctions among the concepts of trust, e-trust, and face-to-face trust.
In their important paper "Autonomous Agents", Floridi and Sanders use "levels of abstraction" to argue that computers are, or may soon be, moral agents. In this paper we use the same levels of abstraction to illuminate differences between human moral agents and computers. Floridi and Sanders contributed definitions of autonomy, moral accountability, and responsibility, but they did not deeply explore some essential questions that computer scientists who design artificial agents must answer. One such question is: "Can an artificial agent that changes its own programming become so autonomous that the original designer is no longer responsible for its behavior?" To explore this question, we distinguish between LoA1 (the user view) and LoA2 (the designer view) by examining the concepts of unmodifiable, modifiable, and fully modifiable tables that control artificial agents. We demonstrate that an unmodifiable table, when viewed at LoA2, distinguishes an artificial agent from a human one. This distinction supports our first counter-claim to Floridi and Sanders: such an agent is not a moral agent, and the designer bears full responsibility for its behavior. We also demonstrate that even for an artificial agent with a fully modifiable table, capable of learning* and intentionality*, that meets the conditions set by Floridi and Sanders for ascribing moral agency, the designer retains strong moral responsibility.
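To make the table distinction concrete, consider a minimal sketch (ours, not the authors'; all names are illustrative) of an agent whose behavior is driven by a percept-to-action table. In the unmodifiable variant every behavior was fixed by the designer; in the fully modifiable variant the agent can rewrite its own entries at run time, which is the case the moral-agency conditions address.

```python
# Illustrative sketch only, not the authors' model. At LoA1 (user view)
# both agents look alike; at LoA2 (designer view) the difference is
# whether the control table can be rewritten at run time.

class TableAgent:
    """Agent whose behavior is fully determined by a percept->action table."""

    def __init__(self, table, modifiable=False):
        self.table = dict(table)      # designer-supplied mapping
        self.modifiable = modifiable  # LoA2 property, invisible at LoA1

    def act(self, percept):
        return self.table.get(percept, "no-op")

    def learn(self, percept, new_action):
        # A fully modifiable agent may rewrite its own entries; an
        # unmodifiable one cannot, so its designer fixed every behavior.
        if not self.modifiable:
            raise PermissionError("table fixed at design time")
        self.table[percept] = new_action


fixed = TableAgent({"greeting": "reply politely"})           # designer authored all behavior
adaptive = TableAgent({"greeting": "reply politely"}, True)  # behavior can drift
adaptive.learn("insult", "reply rudely")  # an entry the designer never wrote
```

On this sketch, the designer of `fixed` answers for every action the agent can ever take, while `adaptive` can come to hold entries no designer authored, which is the situation in which the paper argues the designer nonetheless retains strong moral responsibility.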
Purpose - Facebook users are both producers and consumers (i.e. "prosumers"): they produce the disclosures that underlie Facebook's business success, and they consume its services. The purpose of this paper is to examine how best to characterize these commercialized and compliant members. The authors question Facebook's assertion that members knowingly and willingly approve of personal and commercial transparency, arguing instead that complicity is engineered.
Design/methodology/approach - A survey of Facebook users was conducted between December 2010 and April 2011 at one private and four public universities. Respondents were questioned about: the level of their consumer activity on Facebook; their knowledge of, and attitudes toward, Facebook's advertiser data-sharing practices; their use of sharing restrictions and the groups targeted; and their assessment of transparency benefits versus reputation and consumer risks.
Findings - No evidence was found to support the Facebook account of happy prosumers. Members reported that they avoided advertisements as much as possible and opposed data sharing/selling practices. However, many respondents were found to be relatively uneducated and passive prosumers, and those expressing high concern for privacy were no exception.
Research limitations/implications - Due to the nonprobability sampling method, the results may lack generalizability.
Practical implications - To avoid unwanted commercialization, users of social networking sites must become more aware of data mining and privacy protocols, demand more protections, or switch to more prosumer-friendly platforms.
Originality/value - The paper reports empirical findings on Facebook members' prosumption patterns and attitudes.
In this paper we examine the case of Tay, the Microsoft AI chatbot launched in March 2016. After less than 24 hours, Microsoft shut down the experiment because the chatbot was generating tweets judged to be inappropriate, including racist, sexist, and anti-Semitic language. We contend that the case of Tay illustrates a problem with the very nature of learning software (LS), a term describing any software that changes its program in response to its interactions, when it interacts directly with the public, and with the developer's role and responsibility for it. We make the case that when LS interacts directly with people, or indirectly via social media, the developer has ethical responsibilities beyond those of standard software: there is an additional burden of care.
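As a rough illustration of the LS definition (our sketch with invented names; Tay's actual architecture is not public), learning software in this sense folds user input back into its own response model, which is precisely what makes an unfiltered public deployment risky and motivates the burden of care:

```python
# Hypothetical sketch of "learning software" (LS): a chatbot that folds
# user messages back into its own response pool. All names are invented;
# this is not Tay's design.
import random

BLOCKLIST = {"slur1", "slur2"}  # placeholder for a real content filter

class LearningChatbot:
    def __init__(self):
        self.responses = ["Hello!", "Tell me more."]  # seed responses

    def interact(self, message):
        # The extra "burden of care": screen public input before the
        # program changes itself in response to it.
        if not any(term in message.lower() for term in BLOCKLIST):
            self.responses.append(message)  # LS: program state changes
        return random.choice(self.responses)
```

Without the screening step, every public interaction directly reshapes the bot's future behavior, which is the failure mode the Tay case exhibited.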