Many, possibly most, analytical measurements are carried out to assess compliance with a specification or a regulation, for example in the control of contaminants in food or the detection of performance-enhancing substances in sport. When assessing compliance, the presence of unavoidable measurement uncertainty introduces a risk of making incorrect decisions, that is, of accepting a batch of material that is outside the specification or rejecting one that is within it. This often leads to controversy over whether a compliance decision is correct. How to make reliable compliance decisions is described in the EURACHEM/CITAC Guide "Use of uncertainty information in compliance assessment". The key is the use of decision rules that lead to an unambiguous interpretation of the measurement result and its uncertainty. These decision rules need to be designed to ensure that the requirements of the specification or regulation are met and that the risk of making an incorrect decision is acceptable. Ideally, they should form part of the specification or regulation itself.
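As a simple illustration of such a decision rule, the sketch below implements a guard-band rule in which the acceptance limit is the specification limit reduced by the expanded uncertainty, so that results falling in the guard band are treated as non-compliant. This is only one possible rule; the function name, limits and numbers are illustrative and are not taken from the Guide.

```python
# Illustrative guard-band decision rule (an assumption for illustration,
# not code from the EURACHEM/CITAC Guide): a batch is accepted only if the
# measured value lies below the specification limit by at least the expanded
# uncertainty U, keeping the risk of accepting non-compliant material low.

def accept_batch(measured_value: float, expanded_uncertainty: float,
                 specification_limit: float) -> bool:
    """Return True if the result demonstrates compliance under this rule.

    The acceptance zone is reduced by a guard band equal to the expanded
    uncertainty U (roughly 95 % coverage for k = 2), so results inside the
    guard band are rejected rather than left ambiguous.
    """
    acceptance_limit = specification_limit - expanded_uncertainty
    return measured_value <= acceptance_limit


# Example: contaminant limit 10 ug/kg, result 9.2 ug/kg with U = 1.0 ug/kg.
# The result falls inside the guard band, so the batch is rejected under this rule.
print(accept_batch(9.2, 1.0, 10.0))  # False
```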
This was the topic of a workshop held in Lisbon in June 2011, organised by the Faculty of Sciences of the University of Lisbon, EURACHEM-Portugal and RELACRE, the Portuguese Association of Accredited Laboratories, on behalf of the EURACHEM/CITAC Measurement Uncertainty and Traceability Working Group. This topical issue of ACQUAL contains a selection of the papers contributed to the workshop. These papers, and the posters presented at the workshop, show how the evaluation of uncertainty is now being applied to a wide range of analyses. It is also interesting to see in how many cases the evaluation used method performance data and measurements on reference materials. A number of invited speakers reported on recent developments; these, together with some of the items arising from the discussion, are summarised below.

Measurement results near zero

The evaluation of measurement uncertainty for results close to zero is a problem that has been of concern for some time. It was not considered in the ISO Guide to the Expression of Uncertainty in Measurement (GUM) [1]. When results are close to zero, the uncertainty interval calculated according to the procedures given in the GUM can include values below zero, even when the measurand is, for example, a concentration, which by definition cannot take a negative value. Stephen L R Ellison, in his presentation giving an overview of the revised EURACHEM/CITAC guide "Quantifying Uncertainty in Analytical Measurement" [2], pointed out that the guide now has a much larger section on how uncertainty can be evaluated and reported in this case. Two procedures are described, one utilising classical statistics [3] and the other utilising Bayes' theorem [4].

The procedure for calculating the coverage interval using classical statistics is quite simple. If the expanded uncertainty has been calculated to have, for example, 95 % coverage and the interval would have extended below zero, it is simply truncated at zero; this truncated classical confidence interval maintains exact 95 % coverage. The truncated interval becomes progressively more asymmetric as the result approaches zero. The observed mean remains the best estimate of the value of the measurand until it falls below zero, when a value of zero should be reported instead. As the observed mean falls further below zero, the simple truncated interval becomes unreasonably small, but results in this region may indicate that something is wrong with the measurement.

The Bayesian method allows the information from the measurements to be combined with the information that the value of the concentration cannot be negative. For measurement results that can be described by a t-distribution, then, as shown in [4], the resulting probability distribution of the values attributable to the measurand is approximately a truncated t-distribution. The observed mean, or zero if the observed mean is below zero, should again be used as the reported value, and the expanded uncertainty interval is calculated …
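To make the two treatments concrete, the sketch below computes a result and coverage interval near zero both ways: the classical interval clipped at zero, and a central interval from a t-distribution renormalised over the non-negative values. It is a minimal sketch of the ideas described above, not code from the cited guides; the choice of a central (rather than shortest) interval for the Bayesian case, and all numbers, are assumptions for illustration.

```python
# Illustrative sketch (not code from the cited guides) of the two treatments of
# a result near zero: an observed mean x with standard uncertainty u and nu
# effective degrees of freedom, for a measurand that cannot be negative.

from scipy import stats


def truncated_classical_interval(x, u, nu, coverage=0.95):
    """Classical coverage interval, truncated at zero for a non-negative measurand."""
    k = stats.t.ppf((1 + coverage) / 2, df=nu)
    lower, upper = x - k * u, x + k * u
    best_estimate = max(x, 0.0)             # report zero when the mean is negative
    return best_estimate, (max(lower, 0.0), max(upper, 0.0))


def bayesian_truncated_t_interval(x, u, nu, coverage=0.95):
    """Approximate Bayesian interval from a t-distribution truncated at zero.

    The posterior for the non-negative measurand is taken as the t-distribution
    centred on x, renormalised over [0, inf); the central interval of that
    truncated distribution is returned (one of several possible choices).
    """
    t = stats.t(df=nu, loc=x, scale=u)
    f0 = t.cdf(0.0)                          # probability mass below zero
    lower = t.ppf(f0 + (1 - coverage) / 2 * (1 - f0))
    upper = t.ppf(f0 + (1 + coverage) / 2 * (1 - f0))
    best_estimate = max(x, 0.0)
    return best_estimate, (lower, upper)


# Example: observed mean 0.4, standard uncertainty 0.3, 9 degrees of freedom.
print(truncated_classical_interval(0.4, 0.3, 9))   # classical interval clipped at zero
print(bayesian_truncated_t_interval(0.4, 0.3, 9))  # interval from the truncated t
```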