The value added tax is an important source of revenue for the European Union and its Member States. The aim of the paper is to statistically analyse the extent of the positive impact of selected legislative measures introduced in the fight against tax evasion and subsequently to discuss the sustainability of the current value added tax system in the European context. The analysis was conducted for the Czech and Slovak Republics, two traditionally strong trading partners, and for an important commodity, copper. Regression methods were applied to official time series data on copper exports from the Czech Republic to Slovakia, together with appropriate statistical tests, to detect the potential significance of the new legislative tools: the value added tax control statement and the reverse charge mechanism. In parallel, the study considers fundamental economic factors that affect foreign trade. Based on the analysis, there is sound evidence that the major historical turnaround in the time series occurred because of the then-forthcoming legislative measures intended to restrain the possibility of carousel fraud. The results confirm the positive impact of the measures and also suggest the necessity of more systematic changes in the tax system.
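The kind of intervention analysis the abstract describes can be sketched with ordinary least squares and a step dummy marking the announcement of the measures. This is a minimal illustration on synthetic data; the series, the break month, and all coefficients are assumptions, not the paper's actual data or model.

```python
import numpy as np

# Illustrative sketch (synthetic data, not the paper's series): a monthly
# export series with a structural break at the month the legislative
# measures (control statement / reverse charge) were announced.
rng = np.random.default_rng(0)
n, break_at = 96, 60                       # 8 years of months, break at month 60
t = np.arange(n)
dummy = (t >= break_at).astype(float)      # 1 after the measures, 0 before
y = 100 + 0.5 * t - 40 * dummy + rng.normal(0, 5, n)   # synthetic exports

# OLS with intercept, linear trend, and intervention dummy
X = np.column_stack([np.ones(n), t, dummy])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# t-test of the dummy coefficient (significance of the intervention)
resid = y - X @ beta
s2 = resid @ resid / (n - X.shape[1])
cov = s2 * np.linalg.inv(X.T @ X)
t_stat = beta[2] / np.sqrt(cov[2, 2])
print(f"dummy coefficient: {beta[2]:.1f}, t-statistic: {t_stat:.1f}")
```

A significantly negative (or positive) dummy coefficient is the kind of evidence a study like this would read as a turnaround attributable to the intervention, after controlling for trend.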
The article deals with the quality of measured data, which is essential for effective quality management and for successful implementation of the Industry 4.0 concept and the related Quality 4.0 concept. The quality of measured data is determined by the properties of the measurement system used, which are evaluated by measurement system analysis (MSA). Attention is paid to increasing the effectiveness of the repeatability and reproducibility analysis often used in practice. In this regard, the importance of graphical tools of analysis, which are often neglected in practice, is emphasized, and new or modified graphical tools are proposed. The proposed graphical tools allow a more detailed analysis of the data collected for the study and reveal the causes of measurement system variability. The information obtained by applying these graphical tools is a valuable basis for proposing appropriate actions to improve the measurement system. The use of the proposed graphical tools is demonstrated in a real repeatability and reproducibility study of a measurement system.
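The numerical core of a repeatability and reproducibility (gage R&R) study can be sketched as ANOVA variance components on a crossed parts-by-operators layout. The data below are simulated and the layout dimensions are assumptions; the study's own data and the proposed graphical tools are not reproduced here, and the part-operator interaction is ignored for brevity.

```python
import numpy as np

# Illustrative sketch (synthetic data, not the study's measurements):
# a crossed gage R&R layout with p parts, o operators, r repeat trials.
rng = np.random.default_rng(1)
p, o, r = 10, 3, 3
part_effect = rng.normal(0, 2.0, p)        # part-to-part variation
oper_effect = rng.normal(0, 0.5, o)        # reproducibility (operators)
data = (part_effect[:, None, None] + oper_effect[None, :, None]
        + rng.normal(0, 0.3, (p, o, r)))   # repeatability (equipment)

# ANOVA mean squares (part-operator interaction ignored for brevity)
grand = data.mean()
ms_oper = p * r * ((data.mean(axis=(0, 2)) - grand) ** 2).sum() / (o - 1)
ms_err = ((data - data.mean(axis=2, keepdims=True)) ** 2).sum() / (p * o * (r - 1))

var_repeat = ms_err                                   # repeatability (EV^2)
var_reprod = max((ms_oper - ms_err) / (p * r), 0.0)   # reproducibility (AV^2)
print(f"repeatability sd: {np.sqrt(var_repeat):.2f}, "
      f"reproducibility sd: {np.sqrt(var_reprod):.2f}")
```

The graphical tools the article proposes would then be built on top of exactly these per-part and per-operator summaries, to show *where* the variability originates rather than only how large it is.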
<p><strong>Purpose:</strong> The paper centres on process capability and its relation to data contamination. Process capability may be distorted by imprecise data. The paper analyses to what extent capability changes reflect problems in the data, so that the changes can be attributed to data sampling rather than to the true performance of the process. This is important because it is usually much simpler to increase the precision of data sampling than that of the process itself.</p><p><strong>Methodology/Approach:</strong> The paper has two major parts. In the first part, the effect of data contamination on the observed process characteristic is analysed, using data obtained from simulated random drawings and the chi-squared test. In the second part, the reaction of capability to data contamination is observed; capability is measured by a univariate capability index.</p><p><strong>Findings:</strong> The sensitivity of the index to contamination depends on the capability level before the contamination. This leads to conclusions about when a company using the index should focus on the way the data are measured, and when it should focus on improving the process in question. The analysis shows that if a company is used to high levels of capability and records a drop, it is worth analysing the measurement system first, because the index is more sensitive to data contamination at higher capability levels.</p><p><strong>Research Limitation/Implication:</strong> The study concerns a single univariate index, and the contamination is modelled with only a few probability distributions.</p><p><strong>Originality/Value of paper:</strong> The findings are not difficult to detect, but they are not known in practice, where companies do not realize that problems with their process capability may sometimes lie in the data they use and not in the process itself.</p>
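The mechanism the findings describe can be sketched with the classical univariate index Cp = (USL − LSL) / 6σ: a small fraction of contaminated measurements inflates the estimated standard deviation and pulls the observed capability down, and the drop is largest when the clean-data capability is high. The specification limits, contamination rate, and distributions below are assumptions for illustration, not the paper's settings.

```python
import numpy as np

# Illustrative sketch: how the Cp index reacts when a small fraction of
# the sample comes from a wider (imprecisely measured) distribution.
rng = np.random.default_rng(2)

def cp(sample, lsl, usl):
    """Univariate capability index Cp = (USL - LSL) / (6 * sigma)."""
    return (usl - lsl) / (6 * sample.std(ddof=1))

lsl, usl, n = -3.0, 3.0, 10_000
clean = rng.normal(0, 0.5, n)                  # highly capable process
eps = 0.05                                     # 5 % contamination
outliers = rng.normal(0, 2.0, int(eps * n))    # imprecise measurements
contaminated = np.concatenate([clean[: n - len(outliers)], outliers])

print(f"Cp clean:        {cp(clean, lsl, usl):.2f}")
print(f"Cp contaminated: {cp(contaminated, lsl, usl):.2f}")
```

Here the observed Cp drops noticeably even though the process itself has not changed, which is exactly the situation where, per the findings, checking the measurement system first would be the right reaction.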
<p><strong>Purpose:</strong> This paper analyses a problem that originates in the weighted-average model, a mathematical construct introduced by the theory of multicriteria decision-making that can be used to detect what product a customer desires. The problem occurs because, in order to calculate the overall value of the product, the model needs to know the weight the customer assigns to each product feature, aside from the levels of all the product characteristics. Since by one approach the weights can be estimated by optimization, the question arises as to which optimization criterion to select for the procedure, as different criteria lead to different weights and thus to different product evaluations. The paper analyses the problem in connection with the so-called consistency of pairwise comparisons, which are utilized in the optimization and describe how much the customer prefers one product feature to another. The analysis shows that the problem of which criterion to use to calculate the weights can be eliminated if the pairwise comparisons are consistent. The analysis is performed within pre-defined criteria and is supplemented with case studies supporting the findings.</p><p><strong>Methodology/Approach:</strong> Linear algebra, optimization techniques, case studies.</p><p><strong>Findings:</strong> The results represent a prescription customers can use if they want to avoid the pitfalls of selecting a specific optimization criterion when informing the product maker about what they want based on the weighted-average model.</p><p><strong>Research Limitation/Implication:</strong> The results are related to a specific decision-making model, although that model is still very general and natural.</p><p><strong>Originality/Value of paper:</strong> The problem of selecting an optimization criterion to determine decision weights is not discussed in the theory.</p>
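The core observation can be sketched numerically: when a pairwise-comparison matrix is consistent (every entry satisfies a_ij = w_i / w_j), each column normalizes to the same weight vector, so any reasonable estimation criterion recovers identical weights and the product evaluation is unambiguous. The weight vector and feature levels below are assumptions chosen for illustration.

```python
import numpy as np

# Illustrative sketch: a consistent pairwise-comparison matrix built
# from an assumed weight vector a customer might hold.
w_true = np.array([0.5, 0.3, 0.2])            # assumed customer weights
A = np.outer(w_true, 1 / w_true)              # consistent: A[i, j] = w_i / w_j

# Any column, normalized, yields the same weights
w_from_col0 = A[:, 0] / A[:, 0].sum()
w_from_col2 = A[:, 2] / A[:, 2].sum()
print(w_from_col0, w_from_col2)               # identical weight vectors

# Weighted-average value of a product with assumed feature levels
levels = np.array([0.8, 0.6, 0.9])
print(f"overall value: {w_from_col0 @ levels:.2f}")
```

With an inconsistent matrix, by contrast, different columns (and different optimization criteria) generally yield different weight vectors, which is precisely the ambiguity the paper shows consistency eliminates.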