In this study, we investigate whether receiving an auditor inquiry via e-mail differentially affects client responses as compared to more traditional modes of inquiry, and whether those responses are affected by the auditor's professional tone. In an experiment, experienced business professionals respond to an auditor's information request regarding a potential accounting adjustment. We vary the communication mode of the request (e-mail, audio, or visual) and the professional tone of the communication (more versus less professional) and then measure the extent to which participants reveal information that either supports or does not support the client's accounting position. We find that if an auditor asks for information via e-mail, client responses are more biased towards information that supports the client's position as compared to audio or visual inquiries. In addition, we find that clients respond in a more biased manner when the inquiry is worded in a less professional tone as compared to a more professional tone. Further underscoring the implications of these findings for audit outcomes, our results suggest that if an auditor's request leads clients to provide a response that is biased towards client-supporting information, clients may be less likely to agree with an auditor's proposed income-decreasing adjustment.
Artificial intelligence (AI) and machine learning (ML) are transforming organizations and will soon transform auditing. Many promising areas of AI and ML are within the continuous auditing context. However, the field has yet to recognize how AI and ML can be used for audit inquiry, an essential feature of both traditional audits and continuous auditing. In this research note, we discuss the potential viability of AI-enhanced audit inquiry using “bots” that automatically generate audit inquiries as well as evaluate client responses. In addition, we discuss opportunities for future research in this specific area of automated auditing.
In a globalized audit environment, regulators and researchers have expressed concerns about inconsistent audit quality across nations, with a particular emphasis on Chinese audit quality. Prior research suggests Chinese audit quality may be lower than U.S. audit quality due to a weaker institutional environment (e.g., lower litigation and inspection risk) or cultural value differences (e.g., greater deference to authority). In this study, we propose that lower Chinese audit quality could also be due to Chinese auditors' different cognitive processing styles (i.e., cultural mindsets). We find U.S. auditors are more likely to engage in an analytic mindset approach, focusing on a subset of disconfirming information, whereas Chinese auditors are more likely to take a holistic mindset approach, focusing on a balanced set of confirming and disconfirming information. As a result, Chinese auditors make less skeptical judgments compared to U.S. auditors. We then propose an intervention in which we explicitly instruct auditors to consider using both a holistic and an analytic mindset approach when evaluating evidence. We find this intervention minimizes differences between Chinese and U.S. auditors' judgments by shifting Chinese auditors' attention more towards disconfirming evidence, improving their professional skepticism, while not causing U.S. auditors to become less skeptical. Our study contributes to the auditing literature by identifying cultural mindset differences as a causal mechanism underlying lower professional skepticism levels among Chinese auditors compared to U.S. auditors and providing standard setters and firms with a potential solution that can be adapted to improve Chinese auditors' professional skepticism and reduce cross‐national auditor judgment differences.
ChatGPT, a large language model chatbot, has garnered considerable attention for its ability to respond to users' questions. Using data from 14 countries and 186 institutions, we compare ChatGPT and student performance for 28,085 questions from accounting assessments and textbook test banks. As of January 2023, ChatGPT provides correct answers for 56.5 percent of questions and partially correct answers for an additional 9.4 percent of questions. When considering point values for questions, students significantly outperform ChatGPT, with a 76.7 percent average on assessments compared to 47.5 percent for ChatGPT if no partial credit is awarded and 56.5 percent if partial credit is awarded. Still, ChatGPT performs better than the student average for 15.8 percent of assessments when we include partial credit. We provide evidence of how ChatGPT performs on different question types, accounting topics, class levels, open/closed assessments, and test bank questions. We also discuss implications for accounting education and research.