Purpose
This paper investigates how digital capabilities associated with building information modelling (BIM) can integrate a wide range of information to improve built asset management (BAM) decision-making during the in-use phase of hospital buildings.

Design/methodology/approach
A comprehensive document analysis and a participatory case study were undertaken with a regional NHS hospital to review the types of information that can better inform BAM decision-making, develop a conceptual framework for improving information use during the health-care BAM process, test how that framework can be applied within the BAM division of a health-care organisation and develop a cloud-based BIM application.

Findings
BIM has the potential to facilitate better-informed BAM decision-making by integrating a wide range of information on the physical condition of built assets, the resources available for BAM and the built assets' contribution to health-care provision within an organisation. However, interdepartmental information sharing requires a significant investment of time and cost, as well as changes to information-gathering and storage practices across the whole organisation.

Originality/value
This research demonstrates that implementing BIM during the in-use phase of hospital buildings differs from implementation in the design and construction phases. At the in-use phase, BIM needs to integrate and communicate information within and between the estates and facilities division and other departments of the organisation. This poses a significant change-management task for the organisation's information management systems; thus, a strategically driven, top-down organisational approach is needed to implement BIM for the in-use phase of hospital buildings.
Deep reinforcement learning (DRL) has transformed the field of artificial intelligence (AI), especially after the success of Google DeepMind. This branch of machine learning represents a step toward building autonomous systems that understand the visual world. DRL is currently applied to many kinds of problems that were previously intractable. In this chapter, the authors first introduce the general field of reinforcement learning (RL) and the Markov decision process (MDP). They then describe the common DRL framework and the necessary components of an RL setting. They also analyse stochastic gradient descent (SGD)-based optimizers such as Adam, as well as a non-specific multi-policy selection mechanism for multi-objective Markov decision processes, and compare different deep Q-networks. The chapter concludes with several challenges and research trends in the DRL field.
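The abstract mentions SGD-based optimizers such as Adam, which are used to train the networks in DRL. As a minimal sketch of the standard Adam update rule (a generic illustration, not the chapter's own code; the learning-rate and decay values below are the commonly used defaults):

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=0.001, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update: exponential moment estimates, bias correction, parameter step."""
    m = b1 * m + (1 - b1) * grad          # first-moment (mean of gradients) estimate
    v = b2 * v + (1 - b2) * grad ** 2     # second-moment (uncentered variance) estimate
    m_hat = m / (1 - b1 ** t)             # bias correction for zero initialization
    v_hat = v / (1 - b2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# Toy demonstration: minimize f(theta) = theta^2, whose gradient is 2*theta.
theta = np.array([1.0])
m = np.zeros(1)
v = np.zeros(1)
for t in range(1, 1001):
    grad = 2.0 * theta
    theta, m, v = adam_step(theta, grad, m, v, t, lr=0.01)
```

Because Adam normalizes the gradient by its running second moment, the effective step size is roughly the learning rate regardless of gradient scale, which is one reason it is a common default in DRL training loops.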
This article addresses the problem that remaining useful life (RUL) predictions for high-speed rail catenary systems are not accurate enough, leading to costly and time-consuming planned and reactive maintenance. A new method for predicting the RUL of a catenary is proposed, based on a stacking ensemble learning model with Bayesian hyperparameter optimization. Taking the uplink and downlink catenary data of a high-speed railway line as an example, the preprocessed historical maintenance data are fed into the Bayesian-optimized ensemble model for training; the optimized RUL prediction achieves a root mean square error (RMSE) of 0.068, an R-squared (R2) of 0.957 and a mean absolute error (MAE) of 0.053. The results show that the improved stacking ensemble algorithm improves the RMSE by 28.42%, 30.61% and 32.67% compared with the extreme gradient boosting (XGBoost), support vector machine (SVM) and random forest (RF) algorithms, respectively. The improved prediction accuracy lays the foundation for targeted equipment and system maintenance performed before the catenary fails, thus potentially saving both planned and reactive maintenance costs and time.
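The stacking approach described above can be sketched with scikit-learn's `StackingRegressor`, which trains base learners on cross-validated folds and fits a meta-learner on their out-of-fold predictions. This is an illustrative stand-in only: the synthetic data, the particular base learners (random forest, SVR, gradient boosting in place of XGBoost) and the ridge meta-learner are assumptions, and the paper's Bayesian hyperparameter optimization step is omitted.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import (GradientBoostingRegressor, RandomForestRegressor,
                              StackingRegressor)
from sklearn.linear_model import RidgeCV
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR

# Synthetic regression data as a stand-in for preprocessed catenary RUL records.
X, y = make_regression(n_samples=400, n_features=10, noise=5.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

stack = StackingRegressor(
    estimators=[
        ("rf", RandomForestRegressor(n_estimators=100, random_state=0)),
        ("svr", SVR(C=10.0)),
        ("gbr", GradientBoostingRegressor(random_state=0)),
    ],
    final_estimator=RidgeCV(),  # meta-learner combines out-of-fold base predictions
    cv=5,
)
stack.fit(X_tr, y_tr)
pred = stack.predict(X_te)
rmse = float(np.sqrt(mean_squared_error(y_te, pred)))
r2 = float(r2_score(y_te, pred))
```

In the paper's setting, each base learner's hyperparameters would additionally be tuned by Bayesian optimization before stacking, which is what drives the reported RMSE improvements over the individual XGBoost, SVM and RF models.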