Advanced audit data analytics tools allow auditors to analyze the entire population of accessible client transactions. Although this approach has measurable benefits for audit efficiency and effectiveness, auditors caution that it does not incrementally increase the level of assurance they can provide about the fair presentation of the financial statements. We experimentally examine whether the audit testing methodology (audit data analytics versus traditional sampling) and the type of internal control over financial reporting (ICFR) opinion auditors issue (unqualified versus adverse) are signals of audit quality that affect jurors' perceptions of auditor negligence after an audit failure. We predict and find that jurors' perceptions of auditors' personal control over the audit failure influence their negligence assessments. We also find that when auditors issue an unqualified ICFR opinion, jurors make higher negligence assessments when auditors employ traditional statistical sampling techniques than when they employ audit data analytics. Finally, we find that when auditors issue an adverse ICFR opinion, jurors attribute less blame to auditors and correspondingly more blame to management and the investor for an audit failure. Our study informs regulators, practitioners, and academics about the contextual effects of the ICFR opinion, as well as the perceived assurance and potential litigation effects of using advanced technological tools in the audit.
We investigate whether varying rates of false positives affect auditor skepticism toward red flags identified by data analytic tools. We also examine the extent to which consistent rewards for skepticism can improve the application of skepticism on audits employing data analytics. In an experiment with practicing auditors, we observe that when false positive rates are higher, skepticism levels are lower. We also find that consistently rewarding skepticism significantly improves our auditors' skepticism. However, the positive effect of rewards is limited: we do not see improvements in skepticism when the false positive rate is higher and additional investigation is less likely to identify a misstatement. Our findings highlight the importance of calibrating analytic tools to reduce false positives, and the need for a culture change in which appropriate skepticism is consistently rewarded, if audit firms are to use analytic tools effectively to enhance audit quality.
Audit data analytics (ADAs) allow auditors to analyze the entire population of transactions, which has measurable benefits for audit quality. However, auditors caution that ADAs do not incrementally increase the level of assurance provided on the financial statements. We examine whether the testing methodology and the type of ICFR opinion issued affect jurors' perceptions of auditor negligence. We predict and find that when auditors issue an unqualified ICFR opinion, jurors make higher negligence assessments when auditors employ statistical sampling than when they employ ADAs. Further, when auditors issue an adverse ICFR opinion, jurors attribute less blame to auditors and more blame to the investor for an audit failure. Additionally, jurors perceive the use of ADAs as an indicator of higher audit quality and are less likely to find auditors negligent. However, jurors do not perceive a difference in the level of assurance provided when auditors use ADAs versus sampling-based testing methods.
ChatGPT, a large language model chatbot, has garnered considerable attention for its ability to respond to users' questions. Using data from 14 countries and 186 institutions, we compare ChatGPT and student performance for 28,085 questions from accounting assessments and textbook test banks. As of January 2023, ChatGPT provides correct answers for 56.5 percent of questions and partially correct answers for an additional 9.4 percent of questions. When considering point values for questions, students significantly outperform ChatGPT, averaging 76.7 percent on assessments compared to ChatGPT's 47.5 percent when no partial credit is awarded and 56.5 percent when partial credit is awarded. Still, ChatGPT performs better than the student average for 15.8 percent of assessments when we include partial credit. We provide evidence of how ChatGPT performs on different question types, accounting topics, class levels, open/closed assessments, and test bank questions. We also discuss implications for accounting education and research.