A new fermionic formula for the unrestricted Kostka polynomials of type $A_{n-1}^{(1)}$ is presented. This formula differs from the one given by Hatayama et al. and is valid for all crystal paths based on Kirillov-Reshetikhin modules, not just for the symmetric and anti-symmetric cases. The fermionic formula can be interpreted in terms of a new set of unrestricted rigged configurations. For the proof, a statistics-preserving bijection from this new set of unrestricted rigged configurations to the set of unrestricted crystal paths is given, which generalizes a bijection of Kirillov and Reshetikhin.
The dynamic effect in two-phase flow in porous media, indicated by a dynamic coefficient τ, depends on a number of factors (e.g. medium and fluid properties). Varying these parameters in mathematical models to compute τ incurs significant time and computational costs. To circumvent this issue, we present an artificial neural network (ANN)-based technique for predicting τ over a range of physical parameters of the porous media and fluids that affect the flow. The data employed for training the ANN algorithm were acquired from previous modeling studies. It is observed that ANN modeling can appropriately characterize the relationship between changes in the media and fluid properties and the dynamic effect, thereby ensuring a reliable prediction of the dynamic coefficient as a function of water saturation. Our results indicate that a double-hidden-layer ANN performs better than the single-hidden-layer ANN models for the majority of the performance tests carried out. While single-hidden-layer ANN models can reliably predict the complex relationship between the dynamic coefficient and water saturation at high water saturation, the double-hidden-layer neural network model outperforms them at low water saturation. In all cases, the single- and double-hidden-layer ANN models are better predictors than the regression models attempted in this work.
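The abstract does not include an implementation, but the single- versus double-hidden-layer comparison it describes can be sketched in a few lines. The sketch below uses scikit-learn's MLPRegressor on synthetic data; the feature set (water saturation plus three stand-in medium/fluid properties), the layer sizes, and the toy τ response are all assumptions, not the authors' setup.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)

# Synthetic stand-in for training data taken from prior modelling studies:
# columns = [water saturation, permeability, porosity, viscosity ratio].
X = rng.uniform(0.0, 1.0, size=(500, 4))
# Hypothetical tau response that grows sharply at low water saturation.
y = 1.0 / (X[:, 0] + 0.05) + 0.5 * X[:, 1] - 0.3 * X[:, 2] + 0.1 * X[:, 3]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    "single hidden layer": MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0),
    "double hidden layer": MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=5000, random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(f"{name}: R^2 on held-out data = {r2_score(y_te, model.predict(X_te)):.3f}")
```

In this setup, swapping `hidden_layer_sizes=(16,)` for `(16, 8)` is the only change needed to move from one to two hidden layers, which is what makes a comparison of the kind reported in the abstract cheap to run across many property combinations.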
ABSTRACT. NHS Trusts in England must adopt appropriate levels of continued investment in routine and backlog maintenance if they are to ensure critical backlog does not accumulate. This paper presents the current state of critical backlog maintenance within the National Health Service (NHS) in England through statistical analysis of 115 Acute NHS Trusts. It aims to find empirical support for a causal relationship between building portfolio age and year-on-year increases in critical backlog, and makes recommendations for the use of building portfolio age in strategic asset management. The current trend across this sample of NHS Trusts may be typical of the whole NHS built asset portfolio and suggests that most Trusts need to invest between 0.5 and 1.5 per cent of income (depending upon current critical backlog levels and Trust age profile) simply to maintain existing critical backlog levels. More robust analytics for building age, condition and risk-adjusted backlog maintenance are required.
The objective of an accident-mapping algorithm is to snap traffic accidents onto the correct road segments. Assigning accidents to the correct segments makes it possible to robustly carry out key analyses in accident research, including the identification of accident hot-spots, network-level risk mapping and segment-level accident risk modelling. Existing accident-mapping algorithms have severe limitations: (i) they are not easily 'transferable', as the algorithms are specific to given accident datasets; (ii) they do not perform well in all road-network environments, such as areas of dense road network; and (iii) they do not adequately address the inaccuracies inherent in the recorded data across different types of road environment. The purpose of this paper is to develop a new accident-mapping algorithm based on the common variables observed in most accident databases (e.g. road name and type, direction of vehicle movement before the accident and recorded accident location). The challenges here are to: (i) develop a method that takes into account uncertainties inherent in the recorded traffic accident data and the underlying digital road network data, (ii) accurately determine the type and proportion of inaccuracies, and (iii) develop a robust algorithm that can be adapted to any accident dataset and road network of varying complexity. To overcome these challenges, a distance-based pattern-matching approach is used to identify the correct road segment. This is based on vectors containing feature values that are common to the accident data and the network data. Since each feature does not contribute equally towards the identification of the correct road segment, an ANN approach using a single-layer perceptron is used to 'learn' the relative importance of each feature in the distance calculation and hence in identifying the correct link. The performance of the developed algorithm was evaluated on a reference accident dataset from the UK, confirming that its accuracy is considerably better than that of existing methods.
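As a rough illustration of the distance-based pattern matching and perceptron-weighted feature importance described above, the following sketch trains a single-layer perceptron on feature differences between accident records and candidate segments, then reuses its weights in the distance calculation. The four features, the labelling rule, and all data here are hypothetical stand-ins for the common accident and network variables the paper names, not the authors' actual pipeline.

```python
import numpy as np
from sklearn.linear_model import Perceptron

rng = np.random.default_rng(1)

# Toy training set: each row holds absolute feature differences between an
# accident record and a candidate segment (e.g. road-type mismatch, name
# similarity, bearing difference, spatial offset), all scaled to [0, 1].
# Label 1 = correct segment, 0 = incorrect (hypothetical labelling rule).
diffs = rng.uniform(0.0, 1.0, size=(200, 4))
labels = (diffs.sum(axis=1) < 1.2).astype(int)

clf = Perceptron(random_state=0).fit(diffs, labels)
weights = np.abs(clf.coef_.ravel())  # relative importance of each feature

def best_segment(accident_vec, candidate_vecs):
    """Return the index of the candidate segment with the smallest
    perceptron-weighted distance to the accident feature vector."""
    distances = np.abs(candidate_vecs - accident_vec) @ weights
    return int(np.argmin(distances))

candidates = rng.uniform(0.0, 1.0, size=(5, 4))
print("matched segment index:", best_segment(rng.uniform(0.0, 1.0, 4), candidates))
```

The design choice mirrored here is that the perceptron is not used to classify candidate segments directly; its learned weights serve as per-feature importances inside an otherwise simple nearest-match distance.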
The safety, mobility, environmental, energy, and economic benefits of transportation systems, which are the focus of recent Connected Vehicles (CVs) programs, are potentially dramatic. However, realization of these benefits largely hinges on the timely integration of digital technology into the existing transportation infrastructure. CVs must be enabled to broadcast and receive data to and from other CVs (Vehicle-to-Vehicle, or V2V, communication), to and from infrastructure (Vehicle-to-Infrastructure, or V2I, communication), and to and from other road users, such as bicyclists or pedestrians (Vehicle-to-Other road users communication). Further, for V2I-focused applications, the infrastructure and the transportation agencies that manage it must be able to collect, process, distribute, and archive these data quickly, reliably, and securely. This paper focuses on V2I applications and studies current digital roadway infrastructure initiatives. It highlights the importance of including digital infrastructure investment alongside investment in more traditional transportation infrastructure to keep up with the auto industry's push towards connecting vehicles to other vehicles. By studying current CV testbeds and Smart City initiatives, this paper identifies the digital infrastructure components (i.e., communication options and computing infrastructure) being used by public agencies. It also examines public agencies' limited budgeting for digital infrastructure, and finds that current expenditure is inadequate for realizing the potential benefits of V2I applications. Finally, the paper presents a set of recommendations, based on a review of current practices and future needs, designed to guide agencies responsible for transportation infrastructure. It stresses the importance of collaboration in establishing national and international platforms for the planning, deployment, and management of digital infrastructure to support connected transportation systems across political jurisdictions.
In recent years cybersecurity attacks have caused major disruption and information loss for online organisations, with high-profile incidents in the news. One of the key challenges in advancing the state of the art in intrusion detection is the lack of representative datasets. These datasets typically contain millions of time-ordered events (e.g. network packet traces, flow summaries, log entries), which are subsequently analysed to identify abnormal behaviour and specific attacks [1]. Generating realistic datasets has historically required expensive networked assets, specialised traffic generators, and considerable design preparation. Even with advances in virtualisation, it remains challenging to create and maintain a representative environment. Major improvements are needed in the design, quality and availability of datasets to assist researchers in developing advanced detection techniques. With the emergence of new technology paradigms, such as intelligent transport and autonomous vehicles, it is also likely that new classes of threat will emerge [2]. Given the rate of change in threat behaviour [3], datasets quickly become obsolete, and some of the most widely cited datasets date back over two decades. Older datasets have limited value: they are often heavily filtered and anonymised, with unrealistic event distributions and opaque design methodology. The relative scarcity of intrusion detection system (IDS) datasets is compounded by the lack of a central registry and inconsistent information on provenance. Researchers may also find it hard to locate datasets or understand their relative merits. In addition, many datasets rely on simulation, originating from academic or government institutions. The publication process itself often creates conflicts, given the need to de-identify sensitive information in order to meet regulations such as the General Data Protection Regulation (GDPR) [4]. A final issue for researchers is the lack of standardised metrics with which to compare dataset quality. In this paper we attempt to classify the most widely used public intrusion datasets, providing references to archives and associated literature. We illustrate their relative utility and scope, highlighting threat composition, formats, special features, and associated limitations. We identify best practice in dataset design and describe the potential pitfalls of designing anomaly detection techniques based on data that may be inappropriate or compromised by unrealistic threat coverage. The contributions made in this paper are expected to facilitate continued research and development towards effectively combating the constantly evolving cyber threat landscape.