SYNOPSIS In this paper we argue for the use of Big Data as complementary audit evidence. We evaluate the applicability of Big Data using the audit evidence criteria framework and provide a cost-benefit analysis for sufficiency, reliability, and relevance considerations. Critical challenges, including integration with traditional audit evidence, information transfer issues, and information privacy protection, are discussed and possible solutions are provided.
Accounting scandals like Enron (2001) and Petrobras (2014) remind us that untrustworthy financial information has an adverse effect on the stability of the economy and can ultimately be a source of systemic risk. This financial information is derived from processes and their related monetary flows within a business. As these flows become larger and more complex, it becomes increasingly difficult to distill the primary processes from large amounts of transaction data. Yet by extracting the primary processes we would be able to detect possible inconsistencies in the information efficiently. We use recent advances in network embedding techniques that have demonstrated promising results on node classification problems in domains like biology and sociology. We learn a useful continuous vector representation of the nodes in the network, which is then used for clustering such that the clusters represent meaningful primary processes. The results show that we can extract the relevant primary processes, which are similar to clusters created by a financial expert. Moreover, we construct better predictive models using the flows from the extracted primary processes, which can be used to detect inconsistencies. Our work paves the way toward a more modern, technology- and data-driven financial audit discipline.
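The embed-then-cluster pipeline described above can be illustrated with a minimal spectral sketch. This is not the paper's method (the abstract does not specify the embedding technique); it uses a made-up toy adjacency matrix in which two groups of accounts stand in for two primary processes, embeds the nodes via the graph Laplacian, and clusters them by the sign of the Fiedler vector:

```python
import numpy as np

# Hypothetical toy transaction network: two densely connected groups of
# accounts (nodes 0-2 and 3-5) joined by a single bridge edge, mimicking
# two primary processes.
A = np.array([
    [0, 1, 1, 0, 0, 0],
    [1, 0, 1, 0, 0, 0],
    [1, 1, 0, 1, 0, 0],
    [0, 0, 1, 0, 1, 1],
    [0, 0, 0, 1, 0, 1],
    [0, 0, 0, 1, 1, 0],
], dtype=float)

# Unnormalized graph Laplacian L = D - A.
D = np.diag(A.sum(axis=1))
L = D - A

# Eigenvectors of L give a continuous embedding of the nodes; the
# eigenvector of the second-smallest eigenvalue (the Fiedler vector)
# separates the two weakly connected groups.
eigvals, eigvecs = np.linalg.eigh(L)
fiedler = eigvecs[:, 1]

# Cluster nodes by the sign of their Fiedler-vector component.
clusters = (fiedler > 0).astype(int)
```

In the paper's setting the embedding would be learned (e.g. by a neural network-embedding method) and the clustering would use more than one dimension, but the principle is the same: nodes that participate in the same process land close together in the embedding space.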
This article was retracted due to concerns that J. Hunton may have fabricated survey data.
SUMMARY: Widely used probability-proportional-to-size (PPS) selection methods are not well adapted to circumstances requiring sample augmentation. Limitations include: (1) an inability to augment selections while maintaining PPS properties, (2) a failure to recognize changes in census stratum membership which result from sample augmentation, and (3) imprecise control over line item sample size. This paper presents a new method of PPS selection, a modified version of sieve sampling which overcomes these limitations. Simulations indicate the new method effectively maintains sampling stratum PPS properties in single- and multi-stage samples, appropriately recognizes changes in census stratum membership which result from sample augmentation, and provides precise control over line item sample sizes. In single-stage applications the method provides reliable control of sampling risk over varied tainting levels and error bunching patterns. Tightness and efficiency measures are comparable to randomized systematic sampling and superior to sieve sampling.
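For context, classical (unmodified) sieve sampling, the baseline the paper modifies, can be sketched in a few lines: each line item is screened against an independent uniform draw, so its selection probability is proportional to its book value. The function name and interface below are illustrative, not from the paper:

```python
import random

def sieve_sample(book_values, interval, seed=0):
    """Classical sieve sampling: item i is selected independently with
    probability min(book_value_i / interval, 1), i.e. probability
    proportional to size (PPS). `interval` is the sampling interval J.
    """
    rng = random.Random(seed)
    selected = []
    for i, y in enumerate(book_values):
        if rng.random() < min(y / interval, 1.0):
            selected.append(i)
    return selected
```

A consequence visible in the sketch is the limitation the paper targets: because each item is screened independently, the realized sample size is random rather than precisely controlled, and re-running the sieve to augment a sample does not preserve the PPS properties of the combined selection.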
The series "Advances in Intelligent Systems and Computing" contains publications on the theory, applications, and design methods of intelligent systems and intelligent computing. Virtually all disciplines are covered, including engineering, the natural sciences, computer and information science, ICT, economics, business, e-commerce, environment, healthcare, and the life sciences. The list of topics spans all the areas of modern intelligent systems and computing. The publications within "Advances in Intelligent Systems and Computing" are primarily textbooks and proceedings of important conferences, symposia, and congresses. They cover significant recent developments in the field, of both a foundational and an applied character. An important characteristic of the series is its short publication time and worldwide distribution, which permits a rapid and broad dissemination of research results.
Despite technological advances in accounting systems and audit techniques, sampling remains a commonly used audit tool. For critical estimation applications involving low error rate populations, stratified mean-per-unit sampling (SMPU) has the unique advantage of producing trustworthy confidence intervals. However, SMPU is less efficient than other classical sampling techniques because it requires a larger sample size to achieve comparable precision. To address this weakness, we investigated how SMPU efficiency can be improved via three key design choices: (a) stratum boundary selection method, (b) number of sampling strata, and (c) minimum stratum sample size. Our tests disclosed that SMPU efficiency varies significantly with stratum boundary selection method. An iterative search-based method yielded the best efficiency, followed by the Dalenius–Hodges and Equal-Value-Per-Stratum methods. We also found that variations in Dalenius–Hodges implementation procedures yielded meaningful differences in efficiency. Regardless of boundary selection method, increasing the number of sampling strata beyond levels recommended in the professional literature yielded significant improvements in SMPU efficiency. Although a minor factor, smaller values of minimum stratum sample size were found to yield better SMPU efficiency. Based on these findings, suggestions for improving SMPU efficiency are provided. We also present the first known equations for planning the number of sampling strata given various application-specific parameters.
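The stratified mean-per-unit estimator underlying the study can be sketched directly from its textbook definition (this is standard survey-sampling arithmetic, not code from the paper): the point estimate is the sum of stratum population sizes times stratum sample means, and the standard error combines the finite-population-corrected stratum variances.

```python
import math

def smpu_estimate(strata):
    """Stratified mean-per-unit (SMPU) point estimate and standard error.

    strata: list of (N_h, sample_values) pairs, where N_h is the stratum
    population size and sample_values are the audited sample amounts.
    """
    total, var = 0.0, 0.0
    for N, sample in strata:
        n = len(sample)
        ybar = sum(sample) / n                                # stratum mean
        s2 = sum((y - ybar) ** 2 for y in sample) / (n - 1)   # sample variance
        total += N * ybar                                     # expand to stratum total
        var += N * N * (1 - n / N) * s2 / n                   # FPC-adjusted variance
    return total, math.sqrt(var)
```

The design choices the paper studies (stratum boundaries, number of strata, minimum stratum sample size) all act through the `s2 / n` terms: better boundaries shrink within-stratum variances, which shrinks the standard error for a given total sample size.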
Auditing is a multi-billion-dollar market, with auditors assessing the trustworthiness of financial data and thereby contributing to financial stability in an increasingly interconnected and faster-changing world. We measure cross-sectoral structural similarities between firms using microscopic real-world transaction data. We derive network representations of companies from their transaction datasets and compute an embedding vector for each network. Our approach is based on the analysis of 300+ real transaction datasets and provides auditors with relevant insights. We detect significant changes in bookkeeping structure and measure the similarity between clients. For various tasks, we obtain good classification accuracy. Moreover, closely related companies are near one another in the embedding space while different industries lie further apart, suggesting that the measure captures relevant aspects. Beyond direct applications in computational audit, we expect this approach to be of use at multiple scales, from firms to countries, potentially elucidating structural risks at a broader scale.
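The idea of one embedding vector per company network, compared via a similarity measure, can be sketched with a deliberately crude whole-graph signature. The abstract does not disclose the actual embedding method, so the degree-histogram signature and cosine comparison below are stand-ins chosen only to make the comparison concrete:

```python
import numpy as np

def graph_signature(adj, bins=4):
    """Crude whole-graph embedding: normalized histogram of node degrees.
    A stand-in for the learned per-network embeddings described above;
    degrees >= `bins` fall outside the histogram range in this toy version.
    """
    degrees = np.asarray(adj, dtype=float).sum(axis=1)
    hist, _ = np.histogram(degrees, bins=bins, range=(0, bins))
    return hist / hist.sum()

def cosine(u, v):
    """Cosine similarity between two signature vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy usage: a triangle (all degrees 2) vs. a star (degrees 3,1,1,1).
triangle = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
star = [[0, 1, 1, 1], [1, 0, 0, 0], [1, 0, 0, 0], [1, 0, 0, 0]]
s_tri, s_star = graph_signature(triangle), graph_signature(star)
```

Structurally similar bookkeeping networks would map to nearby signature vectors (cosine near 1), while structurally different ones end up further apart, mirroring the "related companies are near, different industries further apart" observation, though a learned embedding would of course capture far richer structure than degrees alone.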