Sophisticated computer codes that implement mathematical models of physical processes can involve large numbers of inputs, and screening to determine the most active inputs is critical for understanding the input-output relationship. This article presents a new two-stage group screening methodology for identifying active inputs. In Stage 1, groups of inputs showing low activity are screened out; in Stage 2, individual inputs from the active groups are identified. Inputs are evaluated through their estimated total (effect) sensitivity indices (TSIs), which are compared with a benchmark null TSI distribution created from added low-noise inputs. Examples show that, compared with other procedures, the proposed method provides more consistent and accurate results for high-dimensional screening. Additional details and computer code are provided in supplementary materials available online.
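For intuition about the benchmarking step, the following sketch (not the article's own algorithm) estimates total sensitivity indices with Jansen's Monte Carlo estimator on a toy function, appends a few low-effect noise inputs, and uses a quantile of their TSIs as the null benchmark for declaring inputs active. The test function, sample sizes, and 95% cutoff are illustrative assumptions.

```python
# Minimal sketch: TSI estimation plus a noise-input benchmark for screening.
import numpy as np

rng = np.random.default_rng(0)

def test_function(x):
    # Only the first three inputs are active; the remaining candidates are inert.
    return 4.0 * x[:, 0] + 2.0 * np.sin(np.pi * x[:, 1]) + x[:, 2] ** 2

def total_sensitivity_indices(f, d, n=4096):
    """Jansen's Monte Carlo estimator of the total sensitivity index for d inputs."""
    a = rng.uniform(size=(n, d))
    b = rng.uniform(size=(n, d))
    fa, fb = f(a), f(b)
    var_f = np.var(np.concatenate([fa, fb]))
    tsi = np.empty(d)
    for i in range(d):
        ab_i = a.copy()
        ab_i[:, i] = b[:, i]                      # resample only input i
        tsi[i] = np.mean((fa - f(ab_i)) ** 2) / (2.0 * var_f)
    return tsi

d_real, d_noise = 10, 5                           # 10 candidate inputs + 5 appended noise inputs

def augmented_function(x):
    # The appended "low-noise" inputs get a small effect so their estimated
    # TSIs form a non-degenerate benchmark null distribution.
    return test_function(x[:, :d_real]) + 0.05 * x[:, d_real:].sum(axis=1)

tsi = total_sensitivity_indices(augmented_function, d_real + d_noise)
cutoff = np.quantile(tsi[d_real:], 0.95)          # benchmark from the noise-input TSIs
active = np.where(tsi[:d_real] > cutoff)[0]
print("estimated TSIs:", np.round(tsi[:d_real], 3))
print("inputs declared active:", active)          # expect inputs 0, 1, 2
```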
In this paper we study systemic risks in the Korean banking sector using two well-known systemic risk measures: the MES (marginal expected shortfall) and CoVaR. To compute both measures we employ Engle's dynamic conditional correlation model. Our empirical analysis shows, first, that although these two systemic risk measures differ in how they define contributions to systemic risk, both are qualitatively very similar in explaining the cross-sectional differences in systemic risk contributions across banks. Second, we find that systemic risk contributions are closely related to certain bank characteristic variables (e.g., VaR (value at risk), size, and leverage ratio); however, the effects of these variables differ between the cross-sectional and the time-series dimensions. Last, using a threshold VAR model, we suggest an overall systemic risk measure, the aggregate MES, and its associated threshold value for use as an early warning indicator.
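As a rough, self-contained illustration of the MES concept (the paper itself estimates it with Engle's DCC model), the sketch below uses simulated returns and a simple nonparametric estimator: each bank's MES is its average return on days when the market return falls below its 5% quantile. All data and parameters are illustrative assumptions.

```python
# Minimal nonparametric MES sketch on simulated data (not the paper's DCC-based estimator).
import numpy as np

rng = np.random.default_rng(1)
n_days, n_banks = 1000, 4

# Correlated bank returns driven by a common market factor (illustrative, not real banks).
market = rng.normal(0.0, 0.01, n_days)
betas = np.array([0.8, 1.0, 1.2, 1.5])
banks = market[:, None] * betas + rng.normal(0.0, 0.005, (n_days, n_banks))

alpha = 0.05
var_market = np.quantile(market, alpha)        # 5% VaR threshold of the market return
tail_days = market <= var_market               # system-wide tail event

# MES_i = E[ r_i | market return in its lower tail ]; a more negative value
# indicates a larger contribution to systemic risk under this measure.
mes = banks[tail_days].mean(axis=0)
for i, m in enumerate(mes):
    print(f"bank {i}: MES = {m:.4f}")
```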
This paper analyzes a data-spill incident to study policy issues for ICT security from a social science perspective focused on risk. The results of the case analysis are as follows. First, ICT risk can be categorized as 'severe, strong, intensive, or individual' according to its levels of probability and impact. Second, a risk management strategy of 'avoid, transfer, mitigate, or accept' can be assigned by understanding the cultural type of the relevant group, such as 'hierarchy, egalitarianism, fatalism, or individualism'. Third, in each risk situation personal data exhibits the big data characteristics of 'volume, velocity, and variety'. Therefore, the government needs to establish a standing organization responsible for ICT risk policy and management in the new big data era, and ICT risk management policy needs to balance considerations of 'technology, norms, laws, and market' in the big data era.
Specific formulae are derived for quadrature-based estimators of global sensitivity indices when the unknown function can be modeled by a regression plus stationary Gaussian process using the Gaussian, Bohman, or cubic correlation functions. Estimation formulae are derived for the computation of process-based Bayesian and empirical Bayesian estimates of global sensitivity indices when the observed data are the function values corrupted by noise. It is shown how to restrict the parameter space for the compactly supported Bohman and cubic correlation functions so that (at least) a given proportion of the training data correlation entries are zero. This feature is important in the situation where the set of training data is large. The estimation methods are illustrated and compared via examples.
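The sparsity property can be illustrated with a hedged sketch, assuming the Bohman correlation applied to Euclidean distances between training points (the paper works with per-input range parameters): because the Bohman correlation is exactly zero beyond its range parameter, capping that parameter at a low quantile of the pairwise distances guarantees roughly the desired proportion of zero correlation entries. The design and the 90% sparsity target below are illustrative assumptions.

```python
# Sketch: restricting the Bohman range parameter to force sparsity in the
# training-data correlation matrix.
import numpy as np
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(2)
X = rng.uniform(size=(200, 5))                 # toy training design in [0, 1]^5

def bohman_correlation(dist, theta):
    """Bohman correlation: compactly supported, exactly zero for dist >= theta."""
    t = np.minimum(dist / theta, 1.0)
    return np.where(dist < theta,
                    (1.0 - t) * np.cos(np.pi * t) + np.sin(np.pi * t) / np.pi,
                    0.0)

target_zero_prop = 0.90                        # require ~90% zero off-diagonal entries
d = pdist(X)                                   # pairwise Euclidean distances
theta_cap = np.quantile(d, 1.0 - target_zero_prop)

r = bohman_correlation(d, theta_cap)           # condensed off-diagonal correlations
R = squareform(r)                              # full correlation matrix
np.fill_diagonal(R, 1.0)
print(f"theta cap: {theta_cap:.3f}, "
      f"proportion of zero off-diagonal entries: {np.mean(r == 0.0):.2f}")
```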
The purposes of this paper are as follows. First, it conceptualizes big data as a social problem. Second, it explains the difference between big data and conventional mega information. Third, it recommends a role for government in utilizing big data as a policy tool. Fourth, referring to copyright and CCL (Creative Commons License) cases, it explains the regulation of big data with respect to data sovereignty. Finally, it suggests a direction for big data policy design. The study finds that policy design for big data should be distinguished from policy design for mega information in order to resolve data sovereignty issues. From a legal perspective, big data is generated autonomously; it is accessed openly and shared without any particular intention. From a market perspective, big data is likewise created without intention, and it can change automatically when opened through reference features such as Linked Data, raising policy issues of responsibility and authenticity. From a technology perspective, big data is generated in a distributed and diverse way without any concrete form. A different policy approach is therefore needed.