Social media donation is an emerging sustainable business model. In the context of social media, donations can benefit both content creators and platforms and support their sustainable development. Drawing on attachment theory, customer loyalty theory, and interaction ritual chains theory, this paper studies how feedback interaction and participatory interaction affect users’ continued intention to donate, focusing on the roles of users’ emotions and price consciousness. Data were collected through questionnaires from a sample of 466 WeChat users, and structural equation modeling and linear regression were used to test the hypotheses. The results show that emotional attachment and emotional loyalty had significant positive effects on users’ continued intention to donate; participatory interaction had significant positive effects on emotional attachment and emotional loyalty; and feedback interaction had a significant positive effect on emotional attachment. Price consciousness did not directly affect continued intention to donate, but as a moderator it weakened the positive relationship between emotional attachment and continued intention to donate.
The purpose of this paper is to investigate how Artificial Intelligence (AI) decision-making transparency affects humans’ trust in AI. Previous studies have reached inconsistent conclusions about the relationship between AI transparency and humans’ trust in AI (i.e., a positive correlation, no correlation, or an inverted U-shaped relationship). Based on the stimulus-organism-response (SOR) model, algorithmic reductionism, and social identity theory, this paper explores the impact of AI decision-making transparency on humans’ trust in AI from cognitive and emotional perspectives. A total of 235 participants with previous work experience were recruited online to complete an experimental vignette study. The results showed that employees’ perceived transparency, perceived effectiveness of AI, and discomfort with AI mediated the relationship between AI decision-making transparency and employees’ trust in AI. Specifically, AI decision-making transparency (vs. non-transparency) led to higher perceived transparency, which in turn increased both perceived effectiveness (which promoted trust) and discomfort (which inhibited trust). This parallel multiple mediation effect can partly explain the inconsistent findings of previous studies on the relationship between AI transparency and humans’ trust in AI. The research also has practical significance: it offers suggestions for enterprises seeking to improve employees’ trust in AI so that employees can collaborate with AI more effectively.
Artificial intelligence (AI) is increasingly used as a decision agent in enterprises, and employees’ appraisals of AI affect how smoothly AI–employee cooperation proceeds. This paper studies (1) whether employees’ challenge appraisals, threat appraisals, and trust in AI differ under AI transparency versus opacity; (2) how AI transparency affects employees’ trust in AI through employee appraisals (challenge and threat appraisals); and (3) whether and how employees’ domain knowledge about AI moderates the relationship between AI transparency and appraisals. A total of 375 participants with work experience were recruited for an online hypothetical-scenario experiment. The results showed that AI transparency (vs. opacity) led to higher challenge appraisals and trust and lower threat appraisals. However, under both transparency and opacity, employees believed that AI decisions brought more challenges than threats. In addition, we found a parallel mediating effect of challenge and threat appraisals: AI transparency promotes employees’ trust in AI by increasing challenge appraisals and reducing threat appraisals. Finally, employees’ domain knowledge about AI moderated the relationship between AI transparency and appraisals. Specifically, domain knowledge negatively moderated the positive effect of AI transparency on challenge appraisals and positively moderated the negative effect of AI transparency on threat appraisals.
Online search data reflect consumers’ real footprints in information collection and purchase decision-making, and are therefore highly valued for understanding consumer needs. Set against the background of China’s automobile market, this paper studies the relationship between online search data and automobile sales, using keyword-extraction approaches that differ from existing research. First, the search keywords are determined with text-mining techniques: (i) Jieba was used to segment the text of crawled automotive forum posts into words; (ii) the segmented Chinese corpus was embedded into a word vector space with a Word2vec model; and (iii) similar keywords were discovered by computing similarity indexes between word vectors. A fixed-effects model was then built on 108 months of long panel data. Finally, combining a panel vector autoregressive (PVAR) model with a rolling window, we forecast Chinese automobile sales from January to December 2015. The empirical results demonstrate that a long-run equilibrium exists between online search data and automobile sales, and that our regression model explains 76% of the variance. The holdout analysis suggests that online search data can be of substantial use in forecasting Chinese automobile sales.
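The keyword-discovery step described above (ranking candidate keywords by the similarity of their word vectors) can be sketched in a minimal, self-contained form. The function names and the three-dimensional embeddings below are purely illustrative assumptions; the paper's actual pipeline uses vectors trained by a Word2vec model on the segmented forum corpus, which would have far more dimensions.

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def most_similar(seed, vectors, top_n=2):
    """Rank all other keywords by cosine similarity to a seed keyword."""
    scores = [(word, cosine_similarity(vectors[seed], vec))
              for word, vec in vectors.items() if word != seed]
    return sorted(scores, key=lambda item: item[1], reverse=True)[:top_n]

# Made-up 3-dimensional embeddings for illustration only; real Word2vec
# vectors would be learned from the crawled automotive forum posts.
vectors = {
    "sedan": [0.9, 0.1, 0.2],
    "SUV":   [0.8, 0.2, 0.3],
    "price": [0.1, 0.9, 0.4],
}

print(most_similar("sedan", vectors))
```

With these toy vectors, "SUV" ranks closest to "sedan", mirroring how the paper expands an initial seed list into a fuller set of search keywords.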